
LESSON 1: INTRODUCTION TO RISK

UNIT I CHAPTER 1 RISK & ITS MANAGEMENT

Chapter Objectives
Discuss different meanings of the term risk. Describe major types of business risk and personal risk. Explain and compare pure risk to other types of risk. Outline the risk management process and describe major risk management methods. Discuss organization of the risk management function within business.

Risk
Different Meanings Of Risk
The term risk has a variety of meanings in business and everyday life. At its most general level, risk is used to describe any situation where there is uncertainty about what outcome will occur. Life is obviously very risky. Even the short-term future is often highly uncertain. In probability and statistics, financial management, and investment management, risk is often used in a more specific sense to indicate possible variability in outcomes around some expected value.


We will develop the ideas of expected value and risk as reflecting variability around the expected value in the next few chapters. For now it is sufficient for you to think of the expected value as the outcome that would occur on average if a person or business were repeatedly exposed to the same type of risk. If you have not yet encountered these concepts in statistics or finance classes, the following example from the sports world might help. Allen Iverson has averaged about 30 points per game in his career in the National Basketball Association. As we write this, he shows little sign of slowing down. It is therefore reasonable to assume that the expected value of his total points in any given game is about 30 points. Risk, in the sense of variability around the expected value, is clearly present. He might score 50 points or even higher in a particular game, or he might score as few as 10 points. In other situations, the term risk may refer to the expected losses associated with a situation. In insurance markets, for example, it is common to refer to high-risk policyholders. The meaning of risk in this context is that the expected value of losses to be paid by the insurer (the expected loss) is high. As another example, California often is described as having a high risk of earthquake. While this statement might encompass the notion of variability around the expected value, it usually simply means that California's expected loss from earthquakes is high relative to other states.
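
To see how an expected value and the variability around it can be computed, here is a minimal Python sketch using made-up per-game scores for a player who averages about 30 points; the numbers are illustrative assumptions, not actual statistics.

```python
# Hypothetical per-game point totals for a player who averages about 30 points.
# These numbers are made up for illustration.
scores = [28, 35, 50, 22, 10, 31, 27, 41, 30, 26]

n = len(scores)
expected_value = sum(scores) / n                       # the average outcome
variance = sum((x - expected_value) ** 2 for x in scores) / n
std_dev = variance ** 0.5                              # variability around the average

print(f"Expected points per game: {expected_value:.1f}")
print(f"Standard deviation (variability): {std_dev:.1f}")
```

The standard deviation is one common way of summarizing the second meaning of risk: how far actual outcomes tend to fall from the expected value.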

Fig 1.1: Two meanings of risk. One situation is riskier than another if it has greater expected loss and/or greater uncertainty (variability around the expected loss).

In summary (see Figure 1.1), risk is sometimes used in a specific sense to describe variability around the expected value and at other times to describe the expected losses. We employ each of these meanings in this book because it is customary to do so in certain types of risk management and in the insurance business. The particular meaning usually will be obvious from the context.





Risk Is Costly
Regardless of the specific meaning of risk being used, greater risk usually implies greater cost. To illustrate the cost of risk we use a simple example. Suppose that two identical homes are in different but equally attractive locations. The structures have the same value, say $100,000, and initially there is no risk of damage to either house. Then scientists announce that a meteor might hit the earth in the coming week and that one house is in the potential impact area. We would naturally say that one house now has greater risk than the other. Let's assume that everyone agrees that the probability of one house being hit by the meteor is 0.1 and that the probability of the other house being hit is zero. Also assume that the house would be completely destroyed if it were hit (all $100,000 would be lost). Then the expected property loss at one house is greater by an amount equal to 0.1 times $100,000, or $10,000. If the owner were to sell the house immediately following the release of news about the meteor, potential buyers would naturally pay less than $100,000 for the house. Rational people would pay at least $10,000 less, because that is the expected loss from the meteor. Thus, greater risk, in the sense of higher expected losses, is costly to the original homeowner. The value of the house would drop by at least the expected loss. In addition to greater expected losses, one homeowner has greater uncertainty in the sense that potential outcomes have greater variation. At the end of the week, one house will be worth $100,000 with certainty, but the other house could be worth zero or $100,000. This greater uncertainty about the value of the house also is likely to impose costs on the owner. Because of the greater uncertainty, potential buyers might require a price decrease in excess of the expected loss ($10,000). Let's say the additional price drop is $5,000. Thus, greater risk, in the sense of greater uncertainty, is also costly to the original homeowner. To summarize, this example illustrates that both meanings of risk depicted in Figure 1.1 are costly. In this example, the value of the house declined by the expected loss (the first meaning of risk) plus an additional amount due to increased uncertainty (the second meaning of risk). As you will see throughout this book, risk management is concerned with decreasing the cost of risk.
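
The arithmetic of the meteor example can be laid out in a few lines of Python. This is only a sketch of the calculation described above; the $5,000 discount for bearing uncertainty is the assumed figure from the text, not something the code derives.

```python
# The meteor example from the text, step by step. The $5,000 extra discount for
# bearing uncertainty is the text's assumed figure, not derived here.
house_value = 100_000
hit_probability = 0.1            # chance the meteor destroys the house
loss_if_hit = 100_000            # the house is a total loss if hit

expected_loss = hit_probability * loss_if_hit        # 0.1 * 100,000 = 10,000
uncertainty_discount = 5_000                         # assumed additional price drop

price_net_of_expected_loss = house_value - expected_loss
price_with_uncertainty = price_net_of_expected_loss - uncertainty_discount

print(f"Expected loss:                   ${expected_loss:,.0f}")
print(f"Value net of expected loss:      ${price_net_of_expected_loss:,.0f}")
print(f"Value with uncertainty discount: ${price_with_uncertainty:,.0f}")
```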

Direct versus Indirect Expected Losses
When considering the potential losses from a risky situation, you must consider indirect losses that arise in addition to direct losses. In the previous example, if the meteor destroyed the house, the direct loss would be $100,000. Indirect losses arise as a consequence of direct losses. If the house were destroyed, the owner would likely have additional expenses, such as hotel and restaurant costs; these additional expenses would be indirect losses. As another example, when a person's car is damaged, the time spent getting it repaired is an indirect loss. For businesses, indirect losses are extremely important. Indeed, as we discuss in later chapters, the possibility of indirect losses is one of the main reasons that businesses try to reduce risk. Figure 1.2 summarizes the major types of indirect losses that can arise from the risks faced by businesses. For example, damage to productive assets can produce an indirect loss by reducing or eliminating the normal profit (net cash flow) that the asset would have generated if the damage had not occurred. Large direct losses also can lead to indirect losses if they threaten the viability of the business and thereby reduce the willingness of customers and suppliers to deal with the business or change the terms (prices) at which they transact.

Moreover, if sales or production are reduced in response to direct losses, certain types of normal operating expenses (known as continuing expenses) may not decline in proportion to the reduction in revenues, thus increasing indirect losses. If a long interruption in production would cause many customers to switch suppliers, or if a firm has binding contractual commitments to supply products, it also may be desirable for the firm to increase operating costs above normal levels following direct losses. For example, some businesses might find it desirable to maintain production by leasing replacement equipment at a higher cost so as to avoid loss of sales. The increased operating cost would create an indirect loss. Similarly, a business that decides to recall defective products that have produced liability claims will incur product recall expenses and perhaps increased advertising costs to reduce damage to the firm's reputation. Other forms of indirect losses include the possibility that the business will face a higher cost of obtaining funds from lenders or from new equity issues following large direct losses. In some cases, the higher costs of raising capital will cause the firm to forgo making otherwise profitable investments. Finally, in the case of severe direct and indirect losses, the firm might have to reorganize or be liquidated through costly legal proceedings under bankruptcy law.

Fig 1.2: Major types of indirect losses arising from direct losses: loss of normal profit (net cash flow), continuing and extra operating expenses, higher cost of funds and foregone investments, and bankruptcy costs (legal fees).

Types of Risks Facing Businesses & Individuals
Business Risk
Broadly defined, business risk management is concerned with possible reductions in business value from any source. Business value to shareholders, as reflected in the value of the firm's common stock, depends fundamentally on the expected size, timing, and risk (variability) associated with the firm's future net cash flows (cash inflows less cash outflows). Unexpected changes in expected future net cash flows are a major source of fluctuations in business value. In particular, unexpected reductions in cash inflows or increases in cash outflows can significantly reduce business value.






The major business risks that give rise to variation in cash flows and business value are price risk, credit risk, and pure risk (see Figure 1.3).

Fig 1.3: Major Types of Business Risk. Price risk (output price risk and input price risk, arising from commodity price risk, exchange rate risk, and interest rate risk); credit risk; and pure risk (damage to assets, legal liability, worker injury, and employee benefits).
Price Risk
Price risk refers to uncertainty over the magnitude of cash flows due to possible changes in output and input prices. Output price risk refers to the risk of changes in the prices that a firm can demand for its goods and services. Input price risk refers to the risk of changes in the prices that a firm must pay for labor, materials, and other inputs to its production process. Analysis of price risk associated with the sale and production of existing and future products and services plays a central role in strategic management. Three specific types of price risk are commodity price risk, exchange rate risk, and interest rate risk. Commodity price risk arises from fluctuations in the prices of commodities, such as coal, copper, oil, gas, and electricity, which are inputs for some firms and outputs for others. Given the globalization of economic activity, output and input prices for many firms also are affected by fluctuations in foreign exchange rates. Output and input prices also can fluctuate due to changes in interest rates. For example, increases in interest rates may alter a firm's revenues by affecting both the terms of credit allowed and the speed with which customers pay for products purchased on credit. Changes in interest rates also affect the firm's cost of borrowing funds to finance its operations.

Credit Risk
The risk that a firm's customers and the parties to which it has lent money will delay or fail to make promised payments is known as credit risk. Most firms face some credit risk for account receivables. The exposure to credit risk is particularly large for financial institutions, such as commercial banks, that routinely make loans that are subject to risk of default by the borrower. When firms borrow money, they in turn expose lenders to credit risk (i.e., the risk that the firm will default on its promised payments). As a consequence, borrowing exposes the firm's owners to the risk that the firm will be unable to pay its debts and thus be forced into bankruptcy, and the firm generally will have to pay more to borrow money as credit risk increases.

Pure Risk
The risk management function in medium-to-large corporations (and the term risk management) has traditionally focused on the management of what is known as pure risk.





As summarized in Figure 1.3, the major types of pure risk that affect businesses include:
1. The risk of reduction in value of business assets due to physical damage, theft, and expropriation (i.e., seizure of assets by foreign governments).
2. The risk of legal liability for damages for harm to customers, suppliers, shareholders, and other parties.
3. The risk associated with paying benefits to injured workers under workers' compensation laws and the risk of legal liability for injuries or other harms to employees that are not governed by workers' compensation laws.
4. The risk of death, illness, and disability to employees (and sometimes family members) for which businesses have agreed to make payments under employee benefit plans, including obligations to employees under pension and other retirement savings plans.

Personal Risk
The risks faced by individuals and families can be classified in a variety of ways. We classify personal risk into six categories: earnings risk, medical expense risk, liability risk, physical asset risk, financial asset risk, and longevity risk. Earnings risk refers to the potential fluctuation in a family's earnings, which can occur as a result of a decline in the value of an income earner's productivity due to death, disability, aging, or a change in technology. A family's expenses also are uncertain. Health care costs and liability suits, in particular, can cause large unexpected expenses. A family also faces the risk of a loss in the value of the physical assets that it owns. Automobiles, homes, boats, and computers can be lost, stolen, or damaged. Financial asset values also are subject to fluctuation due to changes in inflation and changes in the real values of stocks and bonds. Finally, longevity risk refers to the possibility that retired people will outlive their financial resources. Often individuals obtain advice about personal risk management from professionals, such as insurance agents, accountants, lawyers, and financial planners.

Comparison of Pure Risk and Its Management with Other Types of Risk
Common (but not necessarily distinctive) features of pure risk include the following:
1. Losses from destruction of property, legal liability, and employee injuries or illness often have the potential to be very large relative to a business's resources. While business value can increase if losses from pure risk turn out to be lower than expected, the maximum possible gain in these cases is usually relatively small. In contrast, the potential reduction in business value from losses greater than the expected value can be very large and even threaten the firm's viability.
2. The underlying causes of losses associated with pure risk, such as the destruction of a plant by the explosion of a steam boiler or product liability suits from consumers injured by a particular product, are often largely specific to a particular firm and depend on the firm's actions. As a result, the underlying causes of these losses are often subject to a significant degree of control by businesses; that is, firms can reduce the frequency and severity of losses through actions that alter the underlying causes (e.g., by taking steps to reduce the probability of fire or lawsuit). In comparison, while firms can take a variety of steps to reduce their exposure or vulnerability to price risk, the underlying causes of some important types of price changes are largely beyond the control of individual firms (e.g., economic factors that cause changes in foreign exchange rates, market-wide changes in interest rates, or aggregate consumer demand).
3. Businesses commonly reduce uncertainty and finance losses associated with pure risk by purchasing contracts from insurance companies that specialize in evaluating and bearing pure risk. The prevalence of insurance in part reflects the firm-specific nature of losses caused by pure risk. The fact that events that cause large losses to a given firm commonly have little effect on losses experienced by other firms facilitates risk reduction by diversification, which is accomplished with insurance contracts (see Chapters 5 and 6). Insurance contracts generally are not used to reduce uncertainty and finance losses associated with price risk (and many types of credit risk). Price risks that can simultaneously produce gains for many firms and losses for many others are commonly reduced with financial derivatives, such as forward and futures contracts, option contracts, and swaps. With these contracts, much of the risk of loss is often shifted to parties that have an opposite exposure to the particular risk.
4. Losses from pure risk usually are not associated with offsetting gains for other parties. In contrast, losses to businesses that arise from other types of risk often are associated with gains to other parties. For example, an increase in input prices harms the purchaser of the inputs but benefits the seller. Likewise, a decline in the dollar's value against foreign currencies can harm domestic importers but benefit domestic exporters and foreign importers of U.S. goods. One implication of this difference between pure risk and price risk is that losses from pure risk reduce the total wealth in society, whereas fluctuations in output and input prices need not reduce total wealth. In addition, and as we hinted above, the fact that price changes often produce losses for some firms and gains for others in many cases allows these firms to reduce risk by taking opposite positions in derivative contracts.

While many of the details concerning pure risk and its management differ from other types of risk, it is nonetheless important for you to understand that pure risk and its management are conceptually similar, if not identical, to other types of risk and their management. To make this concrete, consider the case of a manufacturer that uses oil in the production of consumer products. Such a firm faces the risk of large losses from product liability lawsuits if its products harm consumers, but it also faces the risk of potentially large losses from oil price increases. The business can manage the expected cost of product liability settlements or judgments by making the product's design safer or by providing safety instructions and warnings. While the business might not be able to do anything to reduce the likelihood or size of increases in oil prices, it might be able to reduce its exposure to losses from oil price increases by adopting a flexible technology that allows low-cost conversion to other sources of energy.

The business might purchase product liability insurance to reduce its liability risk; it might hedge its risk of loss from oil price increases using oil futures contracts. While the concepts and broad risk management strategies are the same for pure risk and other types of business risk, the specific characteristics of pure risk and the significant reliance on insurance contracts as a method of managing these risks generally lead to their management by personnel with specialized expertise. Major areas of expertise needed for pure risk management include risk analysis, safety management, insurance contracts, and other methods of reducing pure risk, as well as broad financial and managerial skills. The insurance business, with its principal function of reducing pure risk for businesses and individuals, employs millions of people and is one of the largest industries in the United States (and other developed countries). In addition, pure risk management and insurance have a major effect on many other sectors of the economy, such as the legal sector, medical care, real estate lending, and consumer credit.

Increases in business risk of all types and dramatic growth in the use of financial derivatives for hedging price risks in recent years have stimulated substantial growth in the scope and efforts devoted to overall business risk management. It has become increasingly important for managers that focus on pure risk to understand the management of other types of business risk. Similarly, general managers and managers of other types of risk need to understand how pure risk affects specific areas of activity and the business as a whole.

Risk Management
The Risk Management Process
Regardless of the type of risk being considered, the risk management process involves several key steps:
1. Identify all significant risks.
2. Evaluate the potential frequency and severity of losses.
3. Develop and select methods for managing risk.
4. Implement the risk management methods chosen.
5. Monitor the performance and suitability of the risk management methods and strategies on an ongoing basis.
The same general framework applies to business and individual risk management. You will learn more about major exposures to losses from pure risk, risk evaluation, and the selection and implementation of risk management methods in subsequent chapters. Chapter 2 discusses risk management objectives for businesses and individuals.

Risk Management Methods
Figure 1.5 summarizes the major methods of managing risk. These methods, which are not mutually exclusive, can be broadly classified as (1) loss control, (2) loss financing, and (3) internal risk reduction. Loss control and internal risk reduction commonly involve decisions to invest (or forgo investing) resources to reduce expected losses. They are conceptually equivalent to other investment decisions, such as a firm's decision to buy a new plant or an individual's decision to buy a computer. Loss financing decisions refer to decisions about how to pay for losses if they do occur.

Fig 1.5: Major Risk Management Methods. Loss control (reduced level of risky activity, increased precautions); loss financing (retention and self-insurance, insurance, hedging, other contractual risk transfers); internal risk reduction (diversification, investments in information).

Loss Control
Actions that reduce the expected cost of losses by reducing the frequency of losses and/or the severity (size) of losses that occur are known as loss control. Loss control also is sometimes known as risk control. Actions that primarily affect the frequency of losses are commonly called loss prevention methods. Actions that primarily influence the severity of losses that do occur are often called loss reduction methods. An example of loss prevention would be routine inspection of aircraft for mechanical problems. These inspections help reduce the frequency of crashes; they have little impact on the magnitude of losses for crashes that occur. An example of loss reduction is the installation of heat- or smoke-activated sprinkler systems that are designed to minimize fire damage in the event of a fire. Many types of loss control influence both the frequency and severity of losses and cannot be readily classified as either loss prevention or loss reduction. For example, thorough safety testing of consumer products will likely reduce the number of injuries, but it also could affect the severity of injuries. Similarly, equipping automobiles with airbags in most cases should reduce the severity of injuries, but airbags also might influence the frequency of injuries. Whether injuries increase or decrease depends on whether the number of injuries that are completely prevented for accidents that occur exceeds the number of injuries that might be caused by airbags inflating at the wrong time or too forcefully, as well as any increase in accidents and injuries that could occur if protection by airbags causes some drivers to drive less safely. Viewed from another perspective, there are two general approaches to loss control: (1) reducing the level of risky activity, and (2) increasing precautions against loss for activities that are undertaken. First, exposure to loss can be reduced by reducing the level of risky activities, for example, by cutting back production of risky products or shifting attention to less risky product lines. Limiting the level of risky activity primarily affects the frequency of losses. The main cost of this strategy is that it forgoes any benefits of the risky activity that would have been achieved apart from the risk involved. In the limit, exposure to losses can be completely eliminated by reducing the level of activity to zero; that is, by not engaging in the activity at all. This strategy is called risk avoidance. As a specific example of limiting the level of risky activity, consider a trucking firm that hauls toxic chemicals that might harm people or the environment in the case of an accident and thereby produce claims for damages.



This firm could reduce the frequency of liability claims by cutting back on the number of shipments that it hauls. Alternatively, it could avoid the risk completely by not hauling toxic chemicals and instead hauling nontoxic substances (such as clothing or, apart from cholesterol, cheese). An example from personal risk management would be a person who flies less frequently to reduce the probability of dying in a plane crash. This risk could be completely avoided by never flying. Of course, alternative transportation methods might be much riskier (e.g., driving down Interstate 95 from New York to Miami the day before Thanksgiving, along with many long-haul trucks, including those transporting toxic chemicals). The second major approach to loss control is to increase the amount of precautions (level of care) for a given level of risky activity. The goal here is to make the activity safer and thus reduce the frequency and/or severity of losses. Thorough testing for safety and installation of safety equipment are examples of increased precautions. The trucking firm in the example above could give its drivers extensive training in safety, limit the number of hours driven by a driver in a day, and reinforce containers to reduce the likelihood of leakage. Increased precautions usually involve direct expenditures or other costs (e.g., the increased time and attention required to drive an automobile more safely).

Concept Checks
1. Explain how the two major approaches to loss control (reducing risky activity and increasing precautions) could be used to reduce the risk of injury to construction firm employees.
2. How could these two approaches be used to reduce the risk of contracting a sexually transmitted disease?

Loss Financing
Methods used to obtain funds to pay for or offset losses that occur are known as loss financing (sometimes called risk financing). There are four broad methods of financing losses: (1) retention, (2) insurance, (3) hedging, and (4) other contractual risk transfers.


These approaches are not mutually exclusive; that is, they often are used in combination.

With retention, a business or individual retains the obligation to pay for part or all of the losses. For example, a trucking company might decide to retain the risk that cash flows will drop due to oil price increases. When coupled with a formal plan to fund losses for medium-to-large businesses, retention often is called self-insurance. Firms can pay retained losses using either internal or external funds. Internal funds include cash flows from ongoing activities and investments in liquid assets that are dedicated to financing losses. External sources of funds include borrowing and issuing new stock, but these approaches may be very costly following large losses. Note that these approaches still involve retention even though they employ external sources of funds. For example, the firm must pay back any funds borrowed to finance losses. When new stock is issued, the firm must share future profits with new stockholders.




The second major method of financing losses is the purchase of insurance contracts. As you most likely already know, the typical insurance contract requires the insurer to provide funds to pay for specified losses (thus financing these losses) in exchange for receiving a premium from the purchaser at the inception of the contract. Insurance contracts reduce risk for the buyer by transferring some of the risk of loss to the insurer. Insurers in turn reduce risk through diversification. For example, they sell large numbers of contracts that provide coverage for a variety of different losses.

The third broad method of loss financing is hedging. As noted above, financial derivatives, such as forwards, futures, options, and swaps, are used extensively to manage various types of risk, most notably price risk. These contracts can be used to hedge risk; that is, they may be used to offset losses that can occur from changes in interest rates, commodity prices, foreign exchange rates, and the like. Some derivatives have begun to be used in the management of pure risk, and it is possible that their use in pure risk management will expand in the future. Individuals and small businesses do relatively little hedging with derivatives. At this point, it is useful to illustrate hedging with a very simple example. Firms that use oil in the production process are subject to loss from unexpected increases in oil prices; oil producers are subject to loss from unexpected decreases in oil prices. Both types of firms can hedge their risk by entering into a forward contract that requires the oil producer to provide the oil user with a specified amount of oil on a specified future delivery date at a predetermined price (known as the forward price), regardless of the market price of oil on that date. Because the forward price is agreed upon when the contract is written, the oil user and the oil producer both reduce their price risk.

The fourth major method of loss financing is to use one or more of a variety of other contractual risk transfers that allow businesses to transfer risk to another party. Like insurance contracts and derivatives, the use of these contracts also is pervasive in risk management. For example, businesses that engage independent contractors to perform some task routinely enter into contracts, commonly known as hold harmless and indemnity agreements, that require the contractor to protect the business from losing money from lawsuits that might arise if persons are injured by the contractor.
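
To make the oil forward-contract illustration above concrete, the following Python sketch shows that both parties end up paying or receiving the forward price regardless of the spot price on the delivery date. The forward price and quantity are hypothetical values chosen for the example.

```python
# Hypothetical forward contract between an oil user and an oil producer.
# The forward price and quantity are assumptions for illustration.
forward_price = 70.0     # $/barrel agreed when the contract is written
barrels = 1_000          # quantity to be delivered on the delivery date

def user_net_cost_per_barrel(spot_price: float) -> float:
    """Oil user buys at the spot price; the forward gain/loss offsets any change."""
    forward_gain = spot_price - forward_price    # user gains if spot rises
    return spot_price - forward_gain             # always equals the forward price

def producer_net_revenue_per_barrel(spot_price: float) -> float:
    """Oil producer sells at the spot price; the forward gain/loss offsets any change."""
    forward_gain = forward_price - spot_price    # producer gains if spot falls
    return spot_price + forward_gain             # always equals the forward price

for spot in (50.0, 70.0, 90.0):
    user_total = user_net_cost_per_barrel(spot) * barrels
    producer_total = producer_net_revenue_per_barrel(spot) * barrels
    print(f"spot ${spot:5.2f}: user pays ${user_total:,.0f}, "
          f"producer receives ${producer_total:,.0f}")
```

With the hedge in place, the user's cost and the producer's revenue no longer depend on the spot price, which is exactly the reduction in price risk described above.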

Internal Risk Reduction
In addition to loss financing methods that allow businesses and individuals to reduce risk by transferring it to another entity, businesses can reduce risk internally. There are two major forms of internal risk reduction: (1) diversification and (2) investment in information. Regarding the first of these, firms can reduce risk internally by diversifying their activities (i.e., not putting all of their eggs in one basket). You will learn the basics of how diversification reduces risk in Chapter 4. Individuals also routinely diversify risk by investing their savings in many different stocks. The ability of shareholders to reduce risk through portfolio diversification is an important factor affecting insurance and hedging decisions of firms.
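
The diversification idea can be illustrated with a small simulation: pooling many independent exposures makes the average loss far more predictable. This is a rough sketch with assumed loss probabilities and amounts; it is not drawn from the text.

```python
# Simulated pooling of independent exposures, each losing 100,000 with
# probability 0.1. The parameters are assumptions for illustration only.
import random

random.seed(1)
loss_probability = 0.1
loss_amount = 100_000
trials = 10_000

def std_of_average_loss(pool_size: int) -> float:
    """Standard deviation of the average loss per exposure across a pool."""
    averages = []
    for _ in range(trials):
        total = sum(loss_amount for _ in range(pool_size)
                    if random.random() < loss_probability)
        averages.append(total / pool_size)
    mean = sum(averages) / trials
    return (sum((a - mean) ** 2 for a in averages) / trials) ** 0.5

for size in (1, 10, 100):
    print(f"pool of {size:3d} exposures: "
          f"std. dev. of average loss ~ {std_of_average_loss(size):,.0f}")
```

The variability of the average loss shrinks roughly with the square root of the pool size, which is the same mechanism insurers rely on when they sell large numbers of contracts.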

The second major method of reducing risk internally is to invest in information to obtain superior forecasts of expected losses. Investing in information can produce more accurate estimates or forecasts of future cash flows, thus reducing variability of cash flows around the predicted value. Examples abound, including estimates of the frequency and severity of losses from pure risk, marketing research on the potential demand for different products to reduce output price risk, and forecasting future commodity prices or interest rates. One way that insurance companies reduce risk is by specializing in the analysis of data to obtain accurate forecasts of losses. Medium-to-large businesses often find it advantageous to reduce pure risk in this manner as well. Given the large demand for accurate forecasts of key variables that affect business value and determine the price of contracts that can be used to reduce risk (such as insurance and derivatives), many firms specialize in providing information and forecasts to other firms and parties.

Business Risk Management Organisation
Where does the risk management function fit within the overall organizational structure of businesses? In general, the views of senior management concerning the need for, scope, and importance of risk management and possible administrative efficiencies determine how the risk management function is structured and the exact responsibilities of units devoted to risk management. Most large companies have a specific department responsible for managing pure risk that is headed by the risk manager (or director of risk management). However, given that losses can arise from numerous sources, the overall risk management process ideally reflects a coordinated effort between all of the corporation's major departments and business units, including production, marketing, finance, and human resources. Depending on a company's size, a typical risk management department includes various staff specializing in areas such as property-liability insurance, workers' compensation, safety and environmental hazards, claims management, and, in many cases, employee benefits. Given the complexity of modern risk management, most firms with significant exposure to price risk related to the cost of raw materials, interest rate changes, or changes in foreign exchange rates have separate departments or staff members that deal with these risks. Whether there will be more movement in the future toward combining the management of these risks with pure risk management within a unified risk management department is uncertain.

In most firms, the risk management function is subordinate to and thus reports to the finance (treasury) department. This is because of the close relationships between protecting assets from loss, financing losses, and the finance function. However, some firms with substantial liability exposures have the risk management department report to the legal department. A smaller proportion of firms have the risk management unit report to the human resources department. Firms also vary in the extent to which the risk management function is centralized, as opposed to having responsibility spread among the operating units. Centralization may achieve possible economies of scale in arranging loss financing. Moreover, many risk management decisions are strategic in nature, and centralization facilitates effective interaction between the risk manager and senior management. A possible limitation of a centralized risk management function is that it can reduce concern for risk management among the managers and employees of a firm's various operating units. However, allocating the cost of risk or losses to particular units often can improve incentives for unit managers to control costs even if the overall risk management function is centralized. On the other hand, there are advantages to decentralizing certain risk management activities, such as routine safety and environmental issues. In these cases, operating managers are close to the risk and can deal effectively and directly with many issues.

Summary
The term risk broadly refers to situations where outcomes are uncertain. Risk often refers specifically to variability in outcomes around the expected value. In other cases, it refers to the expected value (e.g., the expected value of losses). Regardless of the specific notion of risk being used, risk is costly.
Major types of business risk that produce fluctuations in business value include price risk, credit risk, and pure risk.
Pure risk encompasses risk of loss from (1) damage to and theft or expropriation of business assets, (2) legal liability for injuries to customers and other parties, (3) workplace injuries to employees, and (4) obligations assumed by businesses under employee benefit plans. Pure risk frequently is managed in part by the purchase of insurance to finance losses and reduce risk.
Risk management involves (1) identification of potential direct and indirect losses, (2) evaluation of their potential frequency and severity, (3) development and selection of methods for managing risk to maximize business value, (4) implementation of these methods, and (5) ongoing monitoring.
Major risk management methods include loss control, loss financing, and internal risk reduction.
Loss control reduces expected losses by lowering the level of risky activity and/or increasing precautions against loss for any given level of risky activity.
Loss financing methods include retention (self-insurance), insurance, hedging, and other contractual risk transfers.
Many businesses achieve internal risk reduction through diversification and through investments in information to improve forecasts of expected cash flows.
Most large corporations have a specific department, headed by the risk manager, that is devoted to the management of pure risk and, in some cases, other types of risk.

LESSON 2: RISK MANAGEMENT OBJECTIVE & COST OF RISK


UNIT I CHAPTER 2 OBJECTIVE OF RISK MANAGEMENT

Chapter Objectives
Define and explain the overall objective of risk management. Explain the cost of risk concept. Explain how minimizing the cost of risk maximizes business value. Discuss possible conflicts between business and societal objectives.

The Need for a Risk Management Objective
Risk refers to either variability around the expected value or, in other contexts, the expected value of losses. Holding all else equal, both types of risk (variability and expected losses) are costly; that is, they generally reduce the value of engaging in various activities. At a broad level, risk management seeks to mitigate this reduction in value and thus increase welfare. We begin this chapter with two simple examples to illustrate how risk management can increase value: (1) the risk of product liability claims against a pharmaceutical company, and (2) the risk to individuals associated with automobile accidents.

Consider first a pharmaceutical company that is developing a new prescription drug for the treatment of rheumatoid arthritis, a crippling disease of the joints. The risk of adverse health reactions to the drug and thus legal liability claims by injured users could be substantial. The possibility of injuries, which cause the firm (and/or its liability insurer) to defend lawsuits and pay damages, will increase the business's expected costs. Loss control, such as expenditures on product development and safety testing that reduce expected legal defense costs and expected damage payments, also would be costly. If the firm purchases liability insurance to finance part of the potential losses, the premium paid will include a loading to cover the insurer's administrative costs and provide a reasonable expected return on the insurer's capital. The possibility of uninsured damage claims (self-insured losses or losses in excess of liability insurance coverage limits) will create uncertainty about the amount of costs that will be incurred in any given period.


Most and perhaps all of these factors can increase the price that the firm will need to charge for the drug, thus reducing demand. For a given price, the risk of injury also might discourage some doctors from prescribing the drug. The risk of injury also might cause the firm and the medical profession to distribute the drug only to the most severe cases of the disease, or the firm might even decide not to introduce the drug. As a result, from the company's perspective, the risk of consumer injury could have a significant effect on the value of introducing the drug. Now consider the risk that you will be involved in an auto accident, which could cause physical harm to you and your vehicle, as well as exposing you to the risk of a lawsuit for harming someone else. The possibility of being involved in an accident reduces the value of driving. Other things being equal, people obviously would prefer to have a lower likelihood of accident. But other things are not equal. Safety equipment included in vehicles usually increases their price. Attempting to reduce the likelihood of injury by driving less also can be costly. You either must stay home or take alternative transportation that may not be as attractive as driving (apart from the risk of accident). Driving more safely usually means taking more time to get places, or it requires greater concentration, which means you cannot think as much about other things while you are behind the wheel.



Understanding the Cost of Risk

In addition to the component needed to pay losses, auto and health insurance premiums must again include a loading for the insurer's administrative costs and provide a reasonable expected return on the insurer's capital. Even with insurance, you face some uncertainty about the cost of losses that are less than your deductible (or for liability losses greater than policy limits). You also are exposed to uninsured indirect losses that arise from accidents, such as the time lost in getting your car repaired and submitting a claim to your insurer. Risk is costly and so is the management of risk. We therefore need some guiding principles to determine how much and what types of risk management should be pursued. That is, we need to identify the underlying objective of risk management. The guiding principle or fundamental objective of risk management is to minimize the cost of risk. When we consider business risk management decisions, the objective is to minimize the firm's cost of risk. When we consider individual risk management, the objective is to minimize the individual's cost of risk. And, if we consider public policy risk management decisions, the objective is to minimize society's cost of risk. Most risk management decisions must be made before losses are known. The magnitude of actual losses during a given time period can be determined after the fact (i.e., after the number and severity of accidents are known). Before losses occur, the cost of direct and indirect losses reflects the predicted or expected value of losses during an upcoming time period. Thus, the cost of losses can be determined ex post (after the fact) and estimated ex ante (before the fact). Most risk management decisions must be based on ex ante estimates of the cost of losses and thus the cost of risk.

Components of the Cost of Risk
Regardless of the type of risk being considered, the cost of risk has five main components. For concreteness, we discuss these components from a business perspective for the case of pure risk. Using the ex ante perspective, the cost of pure risk includes: (1) expected losses, (2) the cost of loss control, (3) the cost of loss financing, (4) the cost of internal risk reduction, and (5) the cost of any residual uncertainty that remains after loss control, loss financing, and internal risk reduction methods have been implemented. Figure 2.1 summarizes these five components.
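
A purely hypothetical tally shows how the five components combine into an ex ante cost of risk for a single period; every figure below is an assumption made for illustration, not a value from the text.

```python
# Hypothetical one-year tally of the five components of the cost of risk.
# Every number here is an assumption for illustration.
cost_of_risk_components = {
    "expected losses (direct and indirect)": 2_000_000,
    "cost of loss control": 400_000,
    "cost of loss financing (e.g., insurance loadings)": 300_000,
    "cost of internal risk reduction": 150_000,
    "cost of residual uncertainty": 250_000,
}

total_cost_of_risk = sum(cost_of_risk_components.values())

for component, cost in cost_of_risk_components.items():
    print(f"{component:<52} ${cost:>10,}")
print(f"{'total cost of risk':<52} ${total_cost_of_risk:>10,}")
```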




Expected Cost of Losses
The expected cost of losses includes the expected cost of both direct and indirect losses. As you learned in the last chapter, major types of direct losses include the cost of repairing or replacing damaged assets, the cost of paying workers' compensation claims to injured workers, and the cost of defending against and settling liability claims. Indirect losses include reductions in net profits that occur as a consequence of direct losses, such as the loss of normal profits and continuing and extra expense when production is curtailed or stopped due to direct damage to physical assets. In the case of large losses, indirect losses can include loss of profits from forgone investment and, in the event of bankruptcy, legal expenses and other costs associated with reorganizing or liquidating a business.

In the case of the pharmaceutical company discussed earlier, the expected cost of direct losses would include the expected cost of liability settlements and defense. The expected cost of indirect losses would include items such as (1) the expected cost of lost profit if sales had to be reduced due to adverse liability experience, (2) the expected cost of product recall expenses, and (3) the expected loss in profit on any investments that would not be undertaken if large liability losses were to deplete the firm's internal funds available for investment and increase the cost of borrowing or raising new equity.

Cost of Loss Control

The cost of loss control reflects the cost of increased precautions and limits on risky activity designed to reduce the frequency and severity of accidents. For example, the cost of loss control for the pharmaceutical company would include the cost of testing the product for safety prior to its introduction and any lost profit from limiting distribution of the product in order to reduce exposure to lawsuits.

Cost of Loss Financing
The cost of loss financing includes the cost of self-insurance, the loading in insurance premiums, and the transaction costs in arranging, negotiating, and enforcing hedging arrangements and other contractual risk transfers. The cost of self-insurance includes the cost of maintaining reserve funds to pay losses. This cost in turn includes taxes on income from investing these funds, as well as the possible opportunity cost that can occur if maintaining reserve funds reduces the ability of a business to undertake profitable investment opportunities.





Fig 2.1: Components of the cost of risk. Expected cost of losses (direct losses, indirect losses); cost of loss control (increased precautions, reduced level of risky activity); cost of loss financing (retention and self-insurance, insurance, hedging, other contractual risk transfers); cost of internal risk reduction (diversification, investments in information); cost of residual uncertainty (effects on shareholders, effects on other stakeholders).
Note that when losses are insured, the cost of loss financing through insurance only reflects the loading in the policy's premium for the insurer's administrative expenses and required expected profit. The amount of premium required for the expected value of insured losses is included in the firm's expected cost of losses.

Cost of Internal Risk Reduction Methods
Insurance, hedging, other contractual risk transfers, and certain types of loss control can reduce the uncertainty associated with losses; that is, these risk management methods can make the cost of losses more predictable. Uncertainty also can be reduced through diversification and investing in information to obtain better forecasts of losses. The cost of internal risk reduction includes transaction costs associated with achieving diversification and the cost associated with managing a diversified set of activities. It also includes the cost of obtaining and analyzing data and other types of information to obtain more accurate cost forecasts. In some cases this may involve paying another firm for this information; for example, the pharmaceutical company may pay a risk management consultant to estimate the firm's expected liability costs.

Cost of Residual Uncertainty
Uncertainty about the magnitude of losses seldom will be completely eliminated through loss control, insurance, hedging, other contractual risk transfers, and internal risk reduction. The cost of uncertainty that remains (that is left over) once the firm has selected and implemented loss control, loss financing, and internal risk reduction is called the cost of residual uncertainty. This cost arises because uncertainty generally is costly to risk-averse individuals and investors. For example, residual uncertainty can affect the amount of compensation that investors require to hold a firm's stock. Residual uncertainty also can reduce value through its effects on expected net cash flows. For example, residual uncertainty might reduce the price that customers are willing to pay for the firm's products or cause managers or employees to require higher wages (e.g., the top managers of the pharmaceutical company could require higher pay to compensate them for uncertainty associated with product liability claims).

Cost Tradeoffs
A number of tradeoffs exist among the components of the cost of risk. The three most important cost tradeoffs are those between: (1) the expected cost of direct/indirect losses and loss control costs, (2) the cost of loss financing/internal risk reduction and the expected cost of indirect losses, and (3) the cost of loss financing/internal risk reduction and the cost of residual uncertainty. First, a tradeoff normally exists between expected losses (both direct and indirect) and loss control costs. Increasing loss control costs should reduce expected losses. In the case of the pharmaceutical company, for example, expenditures on developing a safer drug will reduce the expected cost of liability suits. Ignoring for simplicity the possible effects of loss control on other components of the cost of risk (such as the cost of residual uncertainty), minimizing the cost of risk requires the firm to invest in loss control until the marginal benefit, in the form of lower expected costs resulting from direct and indirect losses, equals the marginal cost of loss control.


The amount of loss control that minimizes the cost of risk generally will not involve eliminating the risk of loss. It will not produce a world in which buildings never burn, workers are never hurt, and products never harm customers, because reducing the probability of loss to zero would be too costly. Beyond some point, the cost of additional loss control exceeds the reduction in the expected cost of losses (that is, the marginal cost exceeds the marginal benefit), so that additional loss control will increase the cost of risk. Eliminating the risk of loss will not minimize the cost of risk for either businesses or society. Even if it were technologically feasible to eliminate the risk of harm, people would not want to live in such a world. It simply would be too expensive. To use an absurd example to prove this point, injuries from automobile accidents might be virtually eliminated if automobiles were simply tanks without weapons. But very few people could afford to drive a tank, and those who could would rather risk injury and get to their destination more quickly with a pickup or luxury sports sedan. Because loss control is costly, a point is reached where people prefer some risk of harm to paying more for goods and services or incurring other costs to reduce risk.
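
The loss control tradeoff can be sketched numerically: with an assumed (hypothetical) relationship between loss-control spending and expected losses, the cost-minimizing level of spending leaves some risk in place rather than eliminating it.

```python
# Assumed relationship between loss-control spending and expected losses,
# chosen only to illustrate the marginal benefit / marginal cost tradeoff.
def expected_loss(spend: float) -> float:
    return 1_000_000 / (1.0 + spend / 100_000)

# Search a grid of spending levels for the one minimizing spending plus expected losses.
best_spend, best_total = min(
    ((s, s + expected_loss(s)) for s in range(0, 2_000_001, 10_000)),
    key=lambda pair: pair[1],
)

print(f"Cost-minimizing loss-control spending:      ${best_spend:,}")
print(f"Expected losses at that spending:           ${expected_loss(best_spend):,.0f}")
print(f"Minimum total (spending + expected losses): ${best_total:,.0f}")
# The optimum leaves substantial expected losses: under this assumed relationship,
# driving expected losses to zero would require unbounded spending.
```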


The second major tradeoff among the components of the cost of risk is the tradeoff between the costs of loss financing/internal risk reduction and the expected cost of indirect losses. As more money is spent on loss financing/internal risk reduction, variability in the firm's cash flows declines. Lower variability reduces the probability of costly bankruptcy and the probability that the firm will forgo profitable investments as a result of large uninsured losses. As a result, the expected cost of these indirect losses declines. This tradeoff between the costs of loss financing/internal risk reduction and the expected cost of indirect losses is of central importance in understanding when firms with diversified shareholders will purchase insurance or hedge. The third major tradeoff is that which often occurs between the costs of loss financing/internal risk reduction and the cost of residual uncertainty. For example, if the firm incurs higher loss financing costs by purchasing insurance, residual uncertainty declines. Greater and more costly internal risk reduction also reduces residual uncertainty.

Concept Checks
1. For an airline, describe the most important components of the cost of risk that arise from the risk of plane crashes.
2. How might the risk of crashes be eliminated by the airline, if at all?
3. Assume that you want to fly across the country and that for a price of $400 the probability of a fatal crash is one in a million trips. To reduce this probability to one in 1.5 million trips, the price of a ticket would increase to $800. Would you be willing to pay the extra $400?

Cost of Other Types of Risk
We illustrated the cost of risk concept using a business perspective and analyzing pure risk. However, the cost of risk is a general concept. With some modification, our discussion of the cost of pure risk is applicable to other types of risk. To illustrate, we will briefly discuss the risk of input price changes, using the specific example of a manufacturer that uses oil in its production process. In this case, the prices charged for the firm's products generally would not immediately adjust to reflect changes in the price of oil, so the firm's profits will be affected by oil price changes. Oil price increases will cause the firm's profits (or net cash flows) to decline in the short run, and oil price decreases will lead to a short-run increase in profits. From an ex ante perspective, the expected cost of oil is analogous to the expected cost of direct losses from pure risk, such as those associated with product liability claims against the pharmaceutical company. Ex post, the actual cost of oil price changes can differ from what was expected, just as the actual costs from product liability claims can differ from those expected. If costs are greater than expected, then profits will be lower than expected in both cases. However, because oil is an integral input to the production process for which ongoing expenditures are routinely expected, the expected cost of oil normally would not be considered as part of the cost of risk. (Similarly, while wages paid to employees can differ from what is expected, the expected cost of wages normally would not be considered as part of the cost of risk.) Large increases in the price of oil could cause indirect costs if, for example, production is reduced, alternative sources of energy need to be arranged, or profitable investment is curtailed. The possibility of indirect costs increases the expected cost of using oil in the production process. Expenditures on loss control, such as redesigning the production process to allow for the substitution of other sources of energy, would decrease the expected cost of oil use and indirect losses. With regard to loss financing, the manufacturer might choose to reduce its exposure to the risk of oil price changes with futures contracts. The appropriate use of futures will produce a profit if oil prices increase, thus offsetting all or part of the loss to the firm. (If oil prices drop, all or part of the gain that the firm otherwise would experience will be offset by a loss on its futures contracts.) However, the use of futures contracts involves transaction costs that are analogous to the loading in insurance premiums. The firm also might engage in internal risk reduction by diversifying its activities to reduce the sensitivity of its profits to oil price changes or by investing in information to obtain better forecasts of oil prices. You can see from this simple example that the cost of risk concept illustrated in Figure 2.1 is quite general. This concept provides a useful way of thinking about and evaluating all types of risk management decisions.

Firm Value Maximization and the Cost of Risk
Determinants of Value
A business's value to shareholders depends fundamentally on the expected magnitude, timing, and risk (variability) associated with future net cash flows (cash inflows minus cash outflows) that will be available to provide shareholders with a return on their investment. Business value and the effects of risk on value reflect an ex ante perspective: value depends on expected future net cash flows and the risk associated with these cash flows.





Cash inflows primarily result from sales of goods and services. Cash outflows primarily arise from the production of goods and services (e.g., wages and salaries, the cost of raw materials, interest on borrowed funds, and liability losses). Increases in the expected size of net cash flows increase business value; decreases in expected net cash flows reduce value. The timing of cash flows affects value because a dollar received today is worth more than a dollar received in the future. Because most investors are risk averse, the risk of cash flows reduces the price that they are willing to pay for the firm's stock and thus its value (provided that this risk cannot be eliminated by investors holding a diversified portfolio of investments). For a given level of expected net cash flows, this reduction in the firm's stock price due to risk increases the expected return from buying the stock. In other words, the variation in net cash flows causes investors to pay less for the rights to future cash flows, which increases the expected return on the amount that they invest. Thus, a fundamental principle of business valuation is that risk reduces value and increases the expected return required by investors. The actual return to investors in any given period will depend on realizations of net cash flows during the period and new information about the expected future net cash flows and risk.

Maximizing Value by Minimizing the Cost of Risk
Unexpected increases in losses that are not offset by cash inflows from insurance contracts, hedging arrangements, or other contractual risk transfers increase cash outflows and often reduce cash inflows, thus reducing the value of a firm's stock. The effects of risk and risk management on firm value before losses are known reflect their influence on (1) the expected value of net cash flows and (2) the compensation required by shareholders to bear risk. Much of basic financial theory deals with the kind of risk for which investors demand compensation and the amount of compensation required. We will have more to say about how risk affects expected cash flows, risk, and required compensation in later chapters. For now, it is sufficient for you to understand that making risk management decisions to maximize business value requires an understanding of how risk and risk management methods affect (1) expected net cash flows and (2) the compensation for risk that is required by shareholders.

costs. We emphasize that this value is entirely hypothetical because risk is inherent in real-world business activities. To illustrate the cost of risk, consider the product liability example introduced earlier. For the pharmaceutical company, the value of the firm without risk is the hypothetical value that would arise if (1) it were impossible for the drug to hurt consumers and thus produce lawsuits and (2) the firm did not have to incur any cost to achieve this state of riskless bliss. The reality of injury risk and the costs of loss control give rise to risk-related costs, thus reducing the value of the business. Equation 2.2 implies that if the firm seeks to maximize value, it can do so by minimizing the cost of risk. It accomplishes this by making the reduction in value due to risk as small as possible. Thus, as long as costs are defined to include all the effects on value of risk and risk management, minimizing the cost of risk is the same thing as maximizing value. Why bother introducing the cost of risk instead of just talking about value maximization? First, the cost of risk concept helps focus attention on and facilitates categorization of the major ways that risk reduces value. Second, the concept is used extensively in practice (although its breadth is sometimes narrower, as is noted below).
Measuring the Cost of Risk
In order to maximize business value by minimizing the cost of risk, businesses ideally will estimate the size of the various components of the cost of risk and consider how the firm's operating and risk management decisions will affect these costs. However, in practice, the necessary analysis is costly. Moreover, some of the components are particularly difficult to measure. Examples include the estimated cost of forgone activity (e.g., profits that would have been achieved but for risk and the reduction in activity), the impact of decisions on customers or suppliers, and the cost of residual uncertainty. As a result of these practical limitations, businesses often will not attempt to quantify all of their costs precisely. Small businesses especially are unlikely to measure costs with much precision because the cost of analysis is usually large compared to the potential benefit in the form of improved decisions. However, even when quantifying the various components of the cost of risk is not cost-effective, managers need to understand these components and the general ways in which their magnitude will be affected by risk management. This understanding is necessary for making informed decisions using intuitive and subjective assessments of the effects of decisions on costs.
Subsidiary Goals
While the overall objective of risk management is to maximize business value to shareholders by minimizing the cost of risk, a variety of subsidiary goals is used to guide day-to-day decision making. Examples of these subsidiary goals include making insurance decisions to keep the realized cost of uninsured losses below a specified percent of revenues, purchasing insurance against any loss that could be large enough to seriously disrupt operations, making decisions to comply with stipulations in loan contracts on the types and amounts of insurance that must be purchased, and spending money on loss control when the savings on insurance premiums are sufficient to outweigh


If the firm's cost of risk is defined to include all risk-related costs from the perspective of shareholders, a business can maximize its value to shareholders by minimizing the cost of risk. To see this more clearly, we define:
Cost of risk = Value without risk - Value with risk (2.1)
Writing this expression in terms of the firm's value to shareholders in the presence of risk gives:
Value with risk = Value without risk - Cost of risk (2.2)
The value of the firm without risk is a hypothetical and abstract concept that is nonetheless very useful. It equals the hypothetical value of the business in a world in which uncertainty associated with net cash flows could be eliminated at zero cost. This hypothetical value reflects the magnitude and timing of future net cash flows that would occur without risk and risk-related
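A small Python sketch of Equations 2.1 and 2.2, using invented figures, shows how the five components of the cost of risk listed later in this chapter can be summed and subtracted from the hypothetical value without risk. The component amounts and the firm value are assumptions for illustration only.

# Hypothetical illustration of Equations 2.1 and 2.2.
value_without_risk = 1_000_000

cost_of_risk_components = {
    "expected cost of losses": 40_000,
    "cost of loss control": 15_000,
    "cost of loss financing": 10_000,
    "cost of internal risk reduction": 5_000,
    "cost of residual uncertainty": 8_000,
}

cost_of_risk = sum(cost_of_risk_components.values())     # Equation 2.1 components summed
value_with_risk = value_without_risk - cost_of_risk      # Equation 2.2

print(f"Cost of risk: {cost_of_risk:,}")                  # 78,000
print(f"Value with risk: {value_with_risk:,}")            # 922,000

Minimizing the cost of risk in this accounting identity is the same as making value with risk as large as possible, which is the point of the equations.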







the costs. These types of rules generally can be viewed as a means to an end (i.e., as practical guides to increasing business value). However, in each case, there should be a reasonably clear link between the particular goal and the increase in value.
Objectives for Nonprofit Firms
How does the overall objective of risk management differ for nonprofit or government entities that do not have shareholders? Nonprofit firms can be viewed as attempting to maximize the value of products or services provided to various customers and constituents (e.g., taxpayers or persons that donate money to finance the firm's operations), where value depends on the preferences of these parties. If the cost of risk is defined as the reduction in value of the nonprofit firm's activities due to risk, the appropriate goal of risk management remains minimization of the cost of risk to those constituents.

is needed on insurance, loss control, or other methods of reducing the likelihood of financial distress. From a normative perspective (i.e., from the perspective of how people or businesses should behave), managers are agents of shareholders and therefore should seek to maximize value. As a practical matter, a number of factors give managers strong incentives not to deviate too much from value maximization, thus reducing agency costs:
1. Managers often are compensated in part with bonuses linked to the firm's profitability (and thus indirectly to its stock price), or with stock or stock options that directly increase managers' personal wealth when the firm's stock price increases. These performance-based compensation systems provide a direct incentive for value maximization. Poor performance by managers also can reduce their prospects for achieving employment with other firms (it can reduce their value in the managerial labor market).
2. Failing to maximize the value of the firm's stock makes it more likely that the firm will be acquired by another firm or by parties that can then replace current top management with managers who will take actions to increase firm value.
3. If failure by managers to control costs, including the cost of risk, increases the price or reduces the quality of the firm's products, the firm will lose sales to firms with managers who are more inclined to control costs and increase value. This outcome makes it more likely that managers will be replaced and/or that the managers' salaries will be lower than if they maximized value.
4. Many firms have stockholders with large stakes and other stakeholders (such as lenders) that routinely monitor managerial performance.
5. Indian laws and the legal liability system impose fiduciary duties on managers. Failing to act in the interest of shareholders can give rise to lawsuits against managers and potential legal liability.
Individual Risk Management and the Cost of Risk
The cost of risk concept also applies to individual risk management decisions. For example, when choosing how to manage the risk of automobile accidents, an individual would consider the expected losses (both direct and indirect) from accidents, possible loss control activities (such as driving less at night) and the cost of these activities, loss financing alternatives (amount of insurance coverage) and the cost of these alternatives, and the costs and benefits of gathering information (e.g., about the weather and road conditions). In addition, an individual would consider the cost of any residual uncertainty, which depends on that person's attitude toward risk (uncertainty). The amount of risk management undertaken by individuals depends in part on their degree of risk aversion. A person is risk averse if, when having to decide between two risky alternatives that have the same expected outcome, the person chooses the alternative whose outcomes have less variability. This example illustrates the concept of risk aversion: Suppose that you must choose between the following alternatives. With alternative A, you have a 50 percent chance of winning $100 and a 50 percent


Minimizing the cost of risk for a nonprofit firm may involve giving greater weight to certain factors than would be true for a for-profit firm. A nonprofit hospital, for example, might place greater emphasis on the adverse effects of large losses on its customers than would a for-profit firm. However, while the details may differ, the overall objective of risk management and the key decisions that must be made by nonprofit firms are similar to those for for-profit firms. Nonprofit firms need to identify how risk reduces the net value of services provided and make decisions with the goal of minimizing the cost of risk. They have to consider the same basic components of the cost of risk as for-profit firms. It is not clear whether the absence of shareholders and the possibly fewer penalties for failing to minimize costs make agency costs (see Article 2.1) greater for nonprofit firms than for for-profit firms, or, if so, whether this affects risk management.
Article 2.1: Will Managers Maximize Value?
Owner-managers (e.g., sole proprietors, managing partners, and owner-managers of corporations without publicly traded common stock) have a clear incentive to operate their businesses to achieve their own interests. This generally will involve value maximization, provided that value is appropriately defined to reflect the owners' attitude toward risk and their ability to diversify their risk of ownership.


One of the longest and most thoroughly debated subjects in business economics and finance is whether managers of large corporations with widely held common stock (i.e., with large numbers of shareholders that are not involved in management) will diligently strive to maximize value to shareholders. The ownership and management functions are separated in businesses with widely held common stock. Managers can be viewed as agents of shareholders. Managers may have incentives to take actions that benefit themselves at a cost to shareholders, thus failing to maximize shareholder wealth. The costs associated with these actions, including the costs incurred by shareholders in monitoring managerial behavior, are broadly referred to as agency costs.

Agency costs reduce business value. In the context of risk management, agency costs might manifest themselves in managers being excessively cautious. Because managers could be seriously harmed by financial distress of the firm, they might spend more money than






chance of losing $100. With alternative B, you have a 50 percent chance of winning $10,000 and a 50 percent chance of losing $10,000. Both gambles have an expected value equal to zero, but alternative A's outcomes have less variability (i.e., they are closer to the expected outcome). Stated more simply, most would agree that alternative B is riskier than A. Thus, if you choose alternative A, you are risk averse. If you choose alternative B, you would be called risk loving; and if you are indifferent between the two, you are risk neutral. As mentioned earlier, most people are averse to risk. Risk-averse people generally are willing to pay to reduce risk, or must be compensated for taking on risk. For example, risk-averse people buy insurance to reduce risk. Also, risk-averse people require higher expected returns to invest in riskier securities. The degree of risk aversion can vary across people. If Mary is more risk averse than David, then Mary would likely purchase more insurance than David, all else being equal.
Risk Management and Societal Welfare
From a societal perspective, the key question is how risky activities and risk management by individuals and businesses can best be arranged to minimize the total cost of risk for society. This cost is the aggregate, for all members of society, of the costs of losses, loss control, loss financing, internal risk reduction, and residual uncertainty. Minimizing the total cost of risk in society would maximize the value of societal resources.
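The two gambles described above can be checked with a few lines of Python: both alternatives have the same expected value, but alternative B's outcomes are far more spread out, which is exactly what a risk-averse person avoids. The snippet is only a numerical restatement of the example.

import statistics

alternative_a = [100, -100]        # 50/50 chance of winning or losing $100
alternative_b = [10_000, -10_000]  # 50/50 chance of winning or losing $10,000

for name, outcomes in (("A", alternative_a), ("B", alternative_b)):
    expected_value = statistics.mean(outcomes)   # 0 for both alternatives
    spread = statistics.pstdev(outcomes)         # 100 for A, 10,000 for B
    print(f"Alternative {name}: expected value = {expected_value}, standard deviation = {spread}")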

Similar issues arise within the context of risk. An important example is the effect of government regulations that cause insurance premium rates for some buyers to differ from the expected costs of providing them coverage. By changing how the total cost of risk is divided (or how the total cost pie is sliced), these regulations can alter incentives in ways that increase the total cost of risk (e.g., by encouraging too much risky activity by individuals whose insurance premiums are subsidized). While many persons might argue that these regulations produce a fairer distribution of costs, they nonetheless involve some increase in cost. It is reasonable to assume that individuals, acting privately, will make risk management decisions that minimize their own cost of risk. Similarly, businesses that seek to maximize value to shareholders will make risk management decisions to minimize the cost of risk to the business. The question arises: Will minimizing the cost of risk to the business or individual minimize the cost of risk to society? Note first that maximizing business value by minimizing the cost of risk generally will involve some consideration of the effects of risk management decisions on other major stakeholders in the firm. As suggested above and explained in detail in later chapters, the firm's value to shareholders and the reduction in value due to the cost of risk will depend in part on how risk and risk management affect employees, customers, suppliers, and lenders. The basic reason is that risk and its management affect the terms at which these parties are willing to contract with the business. For example, other things being equal, businesses that expose employees to obvious safety hazards will have to pay higher wages to attract employees. This provides some incentive for the firm to improve safety conditions in order to save on wages (apart from any legal requirement for the firm to pay for injuries). Unfortunately, because we do not live in a perfect world, the goal of making money for shareholders can lead to risk management decisions that may not necessarily minimize the total cost of risk to society. In order for business value maximization to minimize the total cost of risk to society, the business must consider all societal costs in its decisions. In other words, all social costs should be internalized by the business so that its private costs equal social costs. If the private cost of risk (the cost to the business) differs from the social cost of risk (the total cost to society), business value maximization generally will not minimize the total cost of risk to society. A few simple examples should help to illustrate the increase in the social cost of risk that can arise when the private cost is less than the social cost. To illustrate the point simply, assume that there is no government regulation of safety, no workers' compensation law, and no legal liability system that allows persons to recover damages from businesses that cause them harm. Under this assumption, businesses that seek to maximize value to shareholders may not consider possible harm to persons from risky activity. It would be very likely that many businesses would make decisions without fully reflecting upon their possible harm to strangers (persons with no connection to the business). In addition, businesses would tend to produce products that are too risky and expose workers to an excessive risk of workplace injury, given the social cost, if consumers and workers underestimate the risk of injury.
Note in contrast that if consumers

Minimizing the total cost of risk for society produces an efficient level of risk. Efficiency requires individuals and businesses to pursue activities until the marginal benefit equals the marginal cost, including risk-related costs. Expressed in terms of the cost of pure risk, efficiency requires that loss control, loss financing, and internal risk reduction be pursued until the marginal reduction in the expected cost of losses and residual uncertainty equals the marginal cost of these risk management methods. As was discussed earlier, however, achieving the efficiency goal does not eliminate losses because it is simply too costly to do so.
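The marginal condition can be illustrated with a stylized Python sketch. The expected-loss curve used below is an assumption made purely for illustration (losses fall with diminishing returns as loss-control spending rises); the point of the sketch is that the cost-minimizing spend stops well short of eliminating losses.

# Stylized sketch of the efficiency condition: spend on loss control until the
# marginal reduction in expected losses no longer covers the marginal cost.
def expected_losses(spend):
    return 100_000 / (1 + spend / 10_000)    # assumed curve: diminishing returns to loss control

def total_cost(spend):
    return spend + expected_losses(spend)     # loss-control outlay plus remaining expected losses

best_spend = min(range(0, 200_001, 1_000), key=total_cost)
print(f"Cost-minimizing loss-control spend: {best_spend:,}")
print(f"Expected losses still remaining: {expected_losses(best_spend):,.0f}")  # losses are not driven to zero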


While the efficiency concept is abstract and the benefits and costs of risk management are often difficult to measure, the efficiency goal is nonetheless viewed as appropriate by many people (especially economists). The main reason for this is that maximizing the value of resources by minimizing the cost of risk makes the total size of the economic pie as large as possible. Other things being equal, this permits the greatest number of economic needs to be met.

Greater total wealth allows greater opportunity for governments to transfer income from parties that are able to pay taxes to parties that need assistance. A fundamental problem that affects these transfers, however, is that the size of the economic pie is not invariant to how it is sliced (i.e., divided among the population). High marginal tax rates, for example, discourage work effort beyond some point, thus tending to reduce the size of the economic pie. Thus, attempts to produce a more equal distribution of income generally involve some reduction in economic value. The goal is to achieve the right balance between the amount of total wealth and how it is distributed.






and workers can accurately assess the risk of injury, they can influence the business to consider the risk of harm by reducing the price they are willing to pay for products and increasing the wages demanded in view of the risk of injury. For now, it is sufficient to note that a major function of liability and workplace injury law is to get businesses to reflect more upon the risk of harm to consumers, workers, and other parties in making their decisions. If legal rules are designed so that private costs are approximately equal to social costs, then value-maximizing decisions by businesses will help to minimize the total cost of risk in society. Efficient legal rules are those that achieve this goal.
Summary
The overall objective of risk management is to minimize the cost of risk. Components of the cost of risk include: (1) the expected cost of losses, (2) the cost of loss control, (3) the cost of loss financing, (4) the cost of internal risk reduction, and (5) the cost of any residual uncertainty that remains after loss control, loss financing, and internal risk reduction methods have been implemented.
In the context of business risk management, maximizing firm value is equivalent to minimizing the cost of risk.
Loss control reduces the expected cost of losses. Beyond some point, the cost of additional loss control will exceed the reduction in the expected cost of losses. As a result, minimizing the cost of risk will not completely eliminate the risk of loss; even if it were feasible, eliminating the risk of loss would be excessively costly to businesses and consumers alike.
Loss financing and internal risk reduction reduce risk and therefore can reduce both the expected cost of indirect losses and the cost of residual uncertainty.
The overall objective of risk management for nonprofit firms also should be to minimize the cost of risk, provided that the special objectives and circumstances of these firms are incorporated into the cost of risk.
The overall objective of risk management for individuals can be viewed as minimizing the cost of risk and thus maximizing the welfare of individuals.
If businesses do not bear the full costs of their risky activities (that is, if the private cost of risk is less than the social cost), the total cost of risk in society will not be minimized when businesses maximize value. A major function of business liability and workplace injury law is to align private costs with social costs so that business value maximization will minimize the social cost of risk.

Notes:

LESSON 3: INTRODUCTION TO INTEGRATED RISK MANAGEMENT


Chapter Objective:
Understanding of Value Creation
How Integrated Risk Management Creates Value
Defining Optimal Risk Level
Implementing Integrated Risk Management

UNIT I CHAPTER 3 INTEGRATED RISK MANAGEMENT & VALUE CREATION


of its risk management. However, even when offering products and services, banks deal in financial assets and are, therefore, by definition in the financial risk business. Additionally, risk management is also perceived in practice to be necessary and critically important to ensure the long-term survival of banks. Not only is a minimum capital-structure and risk-management approach required by regulation, but customers, who are also liability holders, want and should be protected against default risk, because they deposit substantial stakes of their personal wealth, for the most part with only one bank. The same argument is used from an economy-wide perspective to avoid bank runs and systemic repercussions of a globally intertwined and fragile banking system. Therefore, we find plenty of evidence that banks do run sophisticated risk-management functions in practice (positive theory for risk management). They perceive risk management to be a critical (success) factor that is used both with the intention to create value and because of the bank's concern with lower tail outcomes, that is, the concern with bankruptcy risk. Moreover, banks evaluate (new) transactions and projects in the light of their existing portfolio rather than (only) in the light of the covariance risk with an overall market portfolio. In practice, banks care about the contribution of these transactions to the total risk of the bank when they make capital-budgeting decisions, because of their concern with lower tail outcomes. Additionally, we can also observe in practice that banks do care


The approach typically applied to decide whether a firm creates value is a variant of the traditional discounted cash flow (DCF) analysis of financial theory, with which the value of any asset can be determined. In principle, this multiperiod valuation framework estimates a firm's (free) cash flows and discounts them at the appropriate rate of return to determine the overall firm value from a purely economic perspective. However, since a bank's liability management does not only have a simple financing function, as in industrial corporations, but is rather a part of a bank's business operations, it can create value by itself. Therefore, the common valuation framework is slightly adjusted for banks. It estimates the bank's (free) cash flows to its shareholders and then discounts these at the cost of equity capital, to derive the present value (PV) of the bank's equity, which should (ideally) equal the capitalization of its equity in the stock market.
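The adjusted DCF framework described above can be sketched in a few lines of Python. All figures below are invented: an assumed path of free cash flows to shareholders and an assumed cost of equity. The sketch only illustrates the mechanics of discounting cash flows to equity at the cost of equity; it is not a recommended set of inputs.

# Hypothetical sketch: discount projected free cash flows to equity (FCFE)
# at the cost of equity to obtain the present value of the bank's equity.
free_cash_flows_to_equity = [120, 135, 150, 160, 170]   # annual FCFE, in millions (assumed)
cost_of_equity = 0.11                                    # assumed required return of shareholders

pv_equity = sum(
    cf / (1 + cost_of_equity) ** year
    for year, cf in enumerate(free_cash_flows_to_equity, start=1)
)
print(f"Present value of the bank's equity: {pv_equity:.1f} million")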

This valuation approach is based on neoclassical finance theory and, therefore, on very restrictive assumptions. Taken to the extreme, in this world, since only the covariance (i.e., so-called systematic) risk with a broad market portfolio counts, the value of a (new) transaction or business line would be the same for all banks, and the capital-budgeting decision could be made independently from the capital-structure decision. Additionally, any risk-management action at the bank level would be irrelevant for value creation, because it could be replicated or reversed by the investors in efficient and perfect markets at the same terms and, therefore, would have no impact on the bank's value. However, in practice, broadly categorized, banks do two things:
They offer (financial) products and provide services to their clients.
They engage in financial intermediation and the management of risk.

Therefore, a bank's economic performance, and hence value, depends on the quality of the provided services and the efficiency





Introduction to Value Creation in Global Financial Institutions
Increased (global) competition among banks and the threat of (hostile) takeovers, as well as increased pressure from shareholders for superior returns, has forced banks, like many other companies, to focus on managing their value. It is now universally accepted that a bank's ultimate objective function is value maximization. In general, banks can achieve this either by restructuring from the inside, by divesting genuinely value-destroying businesses, or by being forced into a restructuring from the outside.

Figure 1.1: Integrated view of value creation in banks
about their capital structure (when making capital-budgeting and risk-management decisions) and that they perceive holding capital as both costly and a substitute for conducting risk management. Therefore, banks do not (completely) separate risk-management, capital-budgeting, and capital-structure decisions, but rather determine the three components jointly and endogenously (as depicted in Figure 1.1). However, this integrated decision-making process in banks is not reflected in the traditional valuation framework as determined by the restrictive assumptions of the neoclassical world. And



therefore it appears that some fundamental links to and concerns about value creation in banks are neglected. Apparently, banks have already recognized this deficiency. Because the traditional valuation framework is also often cumbersome to apply in a banking context, many institutions employ a return on equity (ROE) measure (based on book or regulatory capital) instead. However, banks have also realized that such ROE numbers do not have the economic focus of a valuation framework for judging whether a transaction or the bank as a whole contributes to value creation. They are too accounting-driven, the capital requirement is not closely enough linked to the actual riskiness of the institution, and, additionally, they do not adequately reflect the linkage between capital-budgeting, capital-structure, and risk-management decisions. To fill this gap, some of the leading banks have developed a set of practical heuristics called Risk-Adjusted Performance Measures (RAPM), better known, after their most famous representative, as RAROC (risk-adjusted return on capital). These measures can be viewed as modified return on equity ratios and take a purely economic perspective. Since banks are concerned about unexpected losses and how they will affect their own credit rating, they estimate the required amount of (economic or) risk capital that they optimally need to hold and that is commensurate with the (overall) riskiness of their (risk) positions. To do that, banks employ a risk measure called value at risk (VaR), which has evolved as the industry's standard measure for lower tail outcomes (by choice or by regulation). VaR measures the (unexpected) risk contribution of a transaction to the total risk of a bank's existing portfolio. The numerator of this modified ROE ratio is also based on economic rather than accounting numbers and is, therefore, adjusted, for example, for provisions made for credit losses (so-called expected losses). Consequently, normal credit losses do not affect a bank's performance, whereas unexpected credit losses do. In order to judge whether a transaction creates or destroys value for the bank, the current practice is to compare the (single-period) RAPM to a hurdle rate or benchmark return (a stylized calculation is sketched after the list of objectives below). Following the traditional valuation framework of neoclassical finance theory, this opportunity cost is usually determined by the covariance or systematic risk with a broad market portfolio. However, the development and usage of RAROC, the practical evidence for the existence of risk management in banks (positive theory), and the fact that risk management is also used with the intention to enhance value are phenomena unexplained by and unconsidered in neoclassical finance theory. It is, therefore, not surprising that there has been little consensus in academia on whether there is also a normative theory for risk management and as to whether risk management is useful for banks, and why and how it can enhance value. Therefore, the objective of this book is to diminish this discrepancy between theory and practice by:
Deriving circumstances under which risk management at the corporate level can create value in banks.
Laying the theoretical foundations for a normative approach to risk management in banks.
Evaluating the practical heuristics RAROC and economic capital as they are currently applied in banks in the light of the results of the prior theoretical discussion.
Developing, based on the theoretical foundations and the implications from discussing the practical approaches, more detailed instructions on how to conduct risk management and how to measure value creation in banks in practice.
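As referred to above, the following Python sketch shows, with entirely hypothetical numbers, how a single-period RAROC might be put together: an economic (expected-loss-adjusted) profit in the numerator and an economic-capital figure in the denominator, proxied here by a normal-distribution value-at-risk purely for illustration. The inputs, the 99.9 percent confidence level and the hurdle rate are assumptions, not prescriptions.

from statistics import NormalDist

# Stylized RAROC: (revenues - expected losses - costs) / economic capital.
revenues = 25.0            # millions (assumed)
expected_losses = 6.0      # provisioned credit losses (assumed)
operating_costs = 9.0      # assumed

loss_volatility = 12.0     # standard deviation of annual losses, millions (assumed)
confidence = 0.999
economic_capital = loss_volatility * NormalDist().inv_cdf(confidence)   # VaR-style unexpected-loss buffer

raroc = (revenues - expected_losses - operating_costs) / economic_capital
hurdle_rate = 0.12         # assumed benchmark return

print(f"Economic capital (VaR proxy): {economic_capital:.1f} million")
print(f"RAROC: {raroc:.1%} -> {'creates' if raroc > hurdle_rate else 'destroys'} value versus the hurdle")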

In order to achieve these goals, we will proceed in the following way: We will first lay the foundations for the further investigation of the link between risk management and value creation by defining and discussing value maximization as well as risk and its management in a banking context, and establishing whether there is empirical evidence of a link between the two. We will then explore both the neoclassical and the neoinstitutional finance theories on whether we can find rationales for risk management at the corporate level in order to create value. Based on the results of this discussion, we will try to deduce general implications for a framework that encompasses both risk management and value maximization in banks. Using these results, we will outline the fundamentals for an appropriate (total) risk measure that consistently determines the adequate and economically driven capital amount a bank should hold, as well as its implications for the real capital structure in banks. We will then discuss and evaluate the currently applied measure economic capital and how it can be consistently determined in the context of a valuation framework for the various types of risk a bank faces. Subsequently, we will investigate whether RAROC is an adequate capital-budgeting tool to measure the economic performance of and to identify value creation in banks. We do so because, on the one hand, RAROC uses economic capital as the denominator and, on the other hand, it is similar to the traditional valuation framework in that it uses a comparison to a hurdle rate. When exploring RAROC, we take a purely economic view and neglect regulatory restrictions that undeniably have an impact on the economic performance of banks. Moreover, we will focus on the usage of RAPM in the context of value creation. We will not evaluate its appropriateness for other uses such as limit setting and capital allocation. We close by evaluating the derived results with respect to their ability to provide more detailed answers on whether and where banks should restructure, concentrate on their competitive advantages or divest, and whether they provide more detailed instructions on why and when banks should conduct risk management from a value creation perspective (normative theory).
Integrated Risk Management
There are three critical concepts that are cornerstones of the Integrated Risk Management Framework: risk, risk management and integrated risk management. These concepts are elaborated on below.
Risk
Risk is unavoidable and present in virtually every human situation. It is present in our daily lives, public and private










To date, no consensus has emerged, but after much research and discussion, the following description of risk has been developed for the federal Public Service in the context of the Integrated Risk Management Framework:

Risk refers to the uncertainty that surrounds future events and outcomes. It is the expression of the likelihood and impact of an event with the potential to influence the achievement of an organization's objectives.

The phrase "the expression of the likelihood and impact of an event" implies that, as a minimum, some form of quantitative or qualitative analysis is required for making decisions concerning major risks or threats to the achievement of an organization's objectives. For each risk, two calculations are required: its likelihood or probability, and the extent of the impact or consequences.
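A minimal Python sketch of the two assessments described above: each risk is given a likelihood rating and an impact rating, and the product of the two is used to rank the risks. The 1-to-5 scales and the sample risks are invented for illustration; organizations would substitute their own scales and criteria.

# Likelihood x impact scoring and ranking of illustrative risks.
risks = [
    {"name": "Data centre outage",    "likelihood": 2, "impact": 5},   # 1 = low ... 5 = high
    {"name": "Key-supplier failure",  "likelihood": 3, "impact": 4},
    {"name": "Minor reporting delay", "likelihood": 4, "impact": 1},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["name"]}: likelihood {risk["likelihood"]}, impact {risk["impact"]}, score {risk["score"]}')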

Risk Management

Finally, it is recognized that for some organizations, risk management is applied to issues predetermined to result in adverse or unwanted consequences. For these organizations, the definition of risk in the Privy Council Office report, which refers to risk as "a function of the probability (chance, likelihood) of an adverse or unwanted event, and the severity or magnitude of the consequences of that event", will be more relevant to their particular public decision-making contexts. Although this definition of risk refers to the negative impact of the issue, the report acknowledges that there are also positive opportunities arising from responsible risk-taking, and that innovation and risk frequently co-exist. Risk management is not new in the federal public sector. It is an integral component of good management and decision-making at all levels. All departments manage risk continuously whether they realize it or not, sometimes more rigorously and systematically, sometimes less so. More rigorous risk management occurs most visibly in departments whose core mandate is to protect the environment and public health and safety. As with the definition of risk, there are equally many accepted definitions of risk management in use. Some describe risk management as the decision-making process, excluding the



Integrated Risk Management
Copy Right: Rai University

RISK MANAGEMENTFOR GLOBAL FINANCIAL SERVICES

sector organizations. Depending on the context, there are many accepted definitions of risk in use. The common concept in all definitions is uncertainty of outcomes. Where they differ is in how they characterize outcomes. Some describe risk as having only adverse consequences, while others are neutral. While this Framework recognizes the importance of the negative connotation of outcomes associated with the description of risk (i.e., risk is adverse), it is acknowledged that definitions are evolving. Indeed, there is considerable debate and discussion on what would be an acceptable generic definition of risk that would recognize the fact that, when assessed and managed properly, risk can lead to innovation and opportunity. This situation appears more prevalent when dealing with operational risks and in the context of technological risks. For example, Government On-Line (GOL) represents an opportunity to significantly increase the efficiency of public access to government services. It is acknowledged in advance that the benefits of pursuing GOL would outweigh, in the long term, potential negative outcomes, which are foreseen to be manageable.

identification and assessment of risk, whereas others describe risk management as the complete process, including risk identification, assessment and decisions around risk issues. For example, the Privy Council Office's report refers to risk management as "the process for dealing with uncertainty within a public policy environment". For the purposes of the Integrated Risk Management Framework: Risk management is a systematic approach to setting the best course of action under uncertainty by identifying, assessing, understanding, acting on and communicating risk issues. In order to apply risk management effectively, it is vital that a risk management culture be developed. The risk management culture supports the overall vision, mission and objectives of an organization. Limits and boundaries are established and communicated concerning what are acceptable risk practices and outcomes. Since risk management is directed at uncertainty related to future events and outcomes, it is implied that all planning exercises encompass some form of risk management. There is also a clear implication that risk management is everyone's business, since people at all levels can provide some insight into the nature, likelihood and impacts of risk. Risk management is about making decisions that contribute to the achievement of an organization's objectives by applying it both at the individual activity level and in functional areas. It assists with decisions such as the reconciliation of science-based evidence and other factors; costs with benefits and expectations in investing limited public resources; and the governance and control structures needed to support due diligence, responsible risk-taking, innovation and accountability. The current operating environment is demanding a more integrated risk management approach. It is no longer sufficient to manage risk at the individual activity level or in functional silos. Organizations around the world are benefiting from a more comprehensive approach to dealing with all their risks. Today, organizations are faced with many different types of risk (e.g., policy, program, operational, project, financial, human resources, technological, health, safety, political). Risks that present themselves on a number of fronts, as well as high-level, high-impact risks, demand a coordinated, systematic corporate response.

Whatever name they put on it (business ... holistic ... strategic ... enterprise), leading organizations around the world are breaking out of the silo mentality and taking a comprehensive approach to dealing with all the risks they face. (Towers Perrin)
For the purposes of the Integrated Risk Management Framework: Integrated risk management is a continuous, proactive and systematic process to understand, manage and communicate risk from an organization-wide perspective. It is about



making strategic decisions that contribute to the achievement of an organization's overall corporate objectives. Integrated risk management requires an ongoing assessment of potential risks for an organization at every level and then aggregating the results at the corporate level to facilitate priority setting and improved decision-making. Integrated risk management should become embedded in the organization's corporate strategy and shape the organization's risk management culture. The identification, assessment and management of risk across an organization helps reveal the importance of the whole, the sum of the risks and the interdependence of the parts. Integrated risk management does not focus only on the minimization or mitigation of risks, but also supports activities that foster innovation, so that the greatest returns can be achieved with acceptable results, costs and risks. Integrated risk management strives for the optimal balance at the corporate level.
An Integrated Risk Management Framework
The Integrated Risk Management Framework provides guidance to adopt a more holistic approach to managing risk. The application of the Framework is expected to enable employees and organizations to better understand the nature of risk, and to manage it more systematically.

Four Elements and Their Expected Results
The Integrated Risk Management Framework is comprised of four related elements. The elements, and a synopsis of the expected results for each, are presented below. Further details on the conceptual and functional aspects of the Framework are provided in subsequent sections of this document.
Element 1: Developing the Corporate Risk Profile
The organization's risks are identified through environmental scanning;
Current status of risk management within the organization is assessed; and
The organization's risk profile is identified.
Element 2: Establishing an Integrated Risk Management Function
Management direction on risk management is communicated, understood and applied;
Approach to operationalized integrated risk management is implemented through existing decision-making and reporting structures; and
Capacity is built through development of learning plans and tools.
Element 3: Practicing Integrated Risk Management
A common risk management process is consistently applied at all levels;
Results of risk management practices at all levels are integrated into informed decision-making and priority setting;
Tools and methods are applied; and
Consultation and communication with stakeholders is ongoing.
Element 4: Ensuring Continuous Risk Management Learning
A supportive work environment is established where learning from experience is valued and lessons are shared;
Learning plans are built into an organization's risk management practices;
Results of risk management are evaluated to support innovation, learning and continuous improvement; and
Experience and best practices are shared, internally and across government.



The four elements of the Integrated Risk Management Framework are presented as they might be applied: looking outward and across the organization as well as at individual activities. This comprehensive approach to managing risk is intended to establish the relationship between the organization and its operating environment, revealing the interdependencies of individual activities and the horizontal linkages.
Element 1: Developing the Corporate Risk Profile
A broad understanding of the operating environment is an important first step in developing the corporate risk profile. Developing the risk profile at the corporate level is intended to examine both threats and opportunities in the context of an organization's mandate, objectives and available resources. In building the corporate risk profile, information and knowledge at both the corporate and operational levels is collected to assist departments in understanding the range of risks they face, both internally and externally, their likelihood and their potential impacts. In addition, identifying and assessing the existing departmental risk management capacity and capability is another critical component of developing the corporate risk profile. An organization can expect three key outcomes as a result of developing the corporate risk profile:
Threats and opportunities are identified through ongoing internal and external environmental scans, analysis and adjustment.
Current status of risk management within the organization is assessed: challenges/opportunities, capacity, practices, culture, and recognized in planning organization-wide risk management strategies.
The organization's risk profile is identified: key risk areas, risk tolerance, ability and capacity to mitigate, learning needs.
External and Internal Environment
Through the environmental scan, key external and internal factors and risks influencing an organization's policy and management agenda are identified. Identifying major trends and their variation over time is particularly relevant in providing potential early warnings. Some external factors to be considered for potential risks include:
Political: the influence of international governments and other governing bodies;
Economic: international and national markets, globalization;
Social: major demographic and social trends, level of citizen engagement; and
Technological: new technologies.



Internally, the following factors are considered relevant to the development of an organization's risk profile: the overall management framework; governance and accountability structures; values and ethics; operational work environment; individual and corporate risk management culture and tolerances; existing risk management expertise and practices; human resources capacity; level of transparency required; and local and corporate policies, procedures and processes.
The environmental scan increases the organization's awareness of the key characteristics and attributes of the risks it faces. These include:
Type of risk: technological, financial, human resources (capacity, intellectual property), health, and safety;
Source of risk: external (political, economic, natural disasters); internal (reputation, security, knowledge management, information for decision making);
What is at risk: area of impact/type of exposure (people, reputation, program results, materiel, real property); and
Level of ability to control the risk: high (operational); moderate (reputation); low (natural disasters).
An organization's risk profile identifies key risk areas that cut across the organization (functions, programs, systems) as well as individual events, activities or projects that could significantly influence the overall management priorities, performance, and realization of organizational objectives.
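The risk attributes listed above lend themselves to a simple record structure. The Python sketch below is one hypothetical way a risk-profile entry could capture type, source, what is at risk, and level of ability to control; the field values are illustrative only and not part of the Framework itself.

from dataclasses import dataclass

@dataclass
class RiskProfileEntry:
    description: str
    risk_type: str        # e.g., technological, financial, human resources, health, safety
    source: str           # external (political, economic, natural disasters) or internal
    what_is_at_risk: str  # people, reputation, program results, materiel, real property
    controllability: str  # high (operational), moderate (reputation), low (natural disasters)

entry = RiskProfileEntry(
    description="Legacy payment system fails during peak processing",
    risk_type="technological",
    source="internal",
    what_is_at_risk="program results",
    controllability="high",
)
print(entry)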

The environmental scan assists the department in establishing a strategic direction for managing risk, making appropriate adjustments in decisions and actions. It is an ongoing process that reinforces existing management practices and supports the attainment of overall management excellence.
Assessing Current Risk Management Capacity
In assessing internal risk management capacity, the mandate, governance and decision-making structures, planning processes, infrastructure, and human and financial resources are examined from the perspective of risk. The assessment requires an examination of the prevailing risk management culture, risk management processes and practices to determine if adjustments are necessary to deal with the evolving risk environment.


Furthermore, the following factors are considered key in assessing an organization's current risk management capacity: individual factors (knowledge, skills, experience, risk tolerance, propensity to take risk); group factors (the impact of individual risk tolerances and willingness to manage risk); organizational factors (strategic direction, stated or implied risk tolerance); as well as external factors (elements that affect particular risk decisions or how risk is managed in general).
Risk Tolerance
An awareness and understanding of the current risk tolerances of various stakeholders is a key ingredient in establishing the corporate risk profile. The environmental scan will identify stakeholders affected by an organization's decisions and actions, and their degree of comfort with various levels of risk. Understanding the current state of risk tolerance of citizens, parliamentarians, interest groups, suppliers, as well as other




government departments will assist in developing a risk profile and making decisions on what risks must be managed, how, and to what extent. It will also help identify the challenges associated with risk consultations and communication. In the Public Service, citizens' needs and expectations are paramount. For example, most citizens would likely have a low risk tolerance for public health and safety issues (injuries, fatalities), or the loss of Canada's international reputation. Other risk tolerances, for issues such as project delays and slower service delivery, may be less obvious and may require more consultation. In general, there is lower risk tolerance for the unknown, where impacts are new, unobservable or delayed. There are higher risk tolerances where people feel more in control (for example, there is usually a higher risk tolerance for automobile travel than for air travel). Risk tolerance can be determined through consultation with affected parties, or by assessing stakeholders' response or reaction to varying levels of risk exposure. Risk tolerances may change over time as new information and outcomes become available, as societal expectations evolve and as a result of stakeholder engagement on trade-offs. Before developing management strategies, a common approach to the assessment of risk tolerance needs to be understood organization-wide. Determining and communicating an organization's own risk tolerance is also an essential part of managing risk. This process identifies areas where minimal levels of risk are permissible, as well as those that should be managed to higher, yet reasonable, levels of risk.
Element 2: Establishing an Integrated Risk Management Function
Establishing an integrated risk management function means setting up the corporate infrastructure for risk management that is designed to enhance understanding and communication of risk issues internally, to provide clear direction and to demonstrate senior management support. The corporate risk profile provides the necessary input to establish corporate risk management objectives and strategies. To be effective, risk management needs to be aligned with an organization's overall objectives, corporate focus, strategic direction, operating practices and internal culture. In order to ensure risk management is a consideration in priority setting and resource allocation, it needs to be integrated within existing governance and decision-making structures at the operational and strategic levels. To ensure that risk management is integrated in a rational, systematic and proactive manner, an organization should seek to achieve three related outcomes:
Management direction on risk management is communicated, understood and applied: vision, policies, operating principles.
Approach to operationalized integrated risk management is implemented through existing decision-making structures: governance, clear roles and responsibilities, and performance reporting.
Building capacity: learning plans and tools are developed for use throughout the organization.


Strategic Risk Management Direction
The establishment and communication of the organization's risk management vision, objectives and operating principles are vital to providing overall direction, and ensure the successful integration of the risk management function into the organization. Using these instruments can reinforce the notion that risk management is everyone's business. It is essential that management provides a clear statement of its commitment to risk management and determines the best way to implement risk management in its organization. This includes establishing a corporate focus and communicating internal parameters, priorities, and practices for the implementation of risk management. To reinforce the corporate focus on risk management, organizations may dedicate a small number of resources to provide both advisory and challenge functions, and to specifically integrate these responsibilities into an existing unit (for example, Corporate Planning and Policy, Comptrollership Secretariat, Internal Audit).

The integration of risk management into decision-making is supported by a corporate philosophy and culture that encourages everyone to manage risks. This can be accomplished in a number of ways, such as:
Seeking excellence in management practices, including risk management;
Having senior managers champion risk management;
Encouraging innovation, while providing guidance and assistance in situations that do not turn out favorably;
Encouraging managers to develop knowledge and skills in risk management;
Including risk management as part of employees' performance appraisals;
Introducing incentives and rewards; and
Recruiting on risk management ability as well as experience.

In establishing the strategic risk management direction, internal and external concerns, perceptions and risk tolerances are taken into account. It is also imperative to identify acceptable risk tolerance levels so that unfavorable outcomes can be remedied promptly and effectively. Clear communication of the organization's strategic direction will help foster the creation and promotion of a supportive corporate risk management culture. Objectives and strategies for risk management are designed to complement the organization's existing vision and goals. In establishing an overall risk management direction, a clear vision for risk management is articulated and supported by policies and operating principles. The policy would guide employees by describing the risk management process, establishing roles and responsibilities, providing methods for managing risk, as well as providing for the evaluation of both the objectives and results of risk management practices.
Integrating Risk Management into Decision Making
Effective risk management cannot be practiced in isolation, but needs to be built into existing decision-making structures and processes. As risk management is an essential component of good management, integrating the risk management function into existing strategic management and operational processes will ensure that risk management is an integral part of day-to-day activities. In addition, organizations can capitalize on existing capacity and capabilities (e.g., communications, committee structures, existing roles and responsibilities, etc.). While each organization will find its own way to integrate risk management into existing decision-making structures, the following are factors that may be considered:
Aligning risk management with objectives at all levels of the organization;
Introducing risk management components into existing strategic planning and operational processes;
Communicating corporate directions on acceptable level of risk; and
Improving control and accountability systems and processes to take into account risk management and results.





Reporting on Performance
The development of evaluation and reporting mechanisms for risk management activities provides feedback to management and other interested parties in the organization and government-wide. The results of these activities ensure that integrated risk management is effective in the long term. Some of these activities could fall to functional groups in the organization responsible for review and audit. Responsibility may also be assigned to operational managers and employees to ensure that information affecting risk that is collected as part of local reporting or practices is incorporated into the environmental scanning process. Reporting could take place through normal management channels (performance reporting, ongoing monitoring, appraisal) as part of the advisory and challenge functions associated with risk management. Reporting facilitates learning and improved decision-making by assessing both successes and failures, monitoring the use of resources, and disseminating information on best practices and lessons learned. Organizations should evaluate the effectiveness of their integrated risk management processes on a periodic basis. In collaboration with departments, the Treasury Board of Canada Secretariat will review the effectiveness of the Integrated Risk Management Framework and make the necessary adjustments to ensure sustained progress in building a risk-smart workforce and environment. Building risk management capacity is an ongoing challenge even after integrated risk management has become firmly entrenched. Environmental scanning will continue to identify new areas and activities that require attention, as well as the risk management skills, processes, and practices that need to be developed and strengthened. Organizations need to develop their own capacity strategies based on their specific situation and risk exposure. The implementation of the Integrated Risk Management Framework will be further supported by the Treasury Board of Canada Secretariat, which, through a center of expertise, will provide overall guidance, advice and share best practices.





Element 3: Practicing Integrated Risk Management

Implementing an integrated risk management approach requires a management decision and sustained commitment, and is designed to contribute to the realization of organizational objectives. Integrated risk management builds on the results of an environmental scan and is supported by appropriate corporate infrastructure. The following outcomes are expected for practicing integrated risk management:
• A departmental risk management process is consistently applied at all levels, where risks are understood, managed and communicated.
• Results of risk management practices at all levels are integrated into informed decision-making and priority setting: strategic, operational, management and performance reporting.
• Tools and methods are applied as aids to make decisions.
• Consultation and communication with stakeholders is ongoing, internal and external.

A Common Process
A common, continuous risk management process assists an organization in understanding, managing and communicating risk. Continuous risk management has several steps. Emphasis on various points in the process may vary, as may the type, rigor or extent of actions considered, but the basic steps are similar. In the exhibits that follow, Exhibit 1 illustrates an example of a continuous risk management process that focuses on an integrated approach to risk management, while Exhibit 2 presents a risk management decision-making process in the context of public policy.

Exhibit 1: A Common Risk Management Process



Exhibit 1 groups the process into four phases and nine steps: Risk Identification (1. Identifying Issues, Setting Context); Risk Assessment (2. Assessing Key Risk Areas; 3. Measuring Likelihood and Impact; 4. Ranking Risks); Responding to Risk (5. Setting Desired Results; 6. Developing Options; 7. Selecting a Strategy; 8. Implementing the Strategy); and Monitoring and Evaluation (9. Monitoring, Evaluating and Adjusting).

Internal and external communication and continuous learning improve understanding and skills for risk management practice at all levels of an organization, from corporate through to front-line operations. The process provides common language, guides decision-making at all levels, and allows organizations to tailor their activities at the local level. Documenting the rationale for arriving at decisions strengthens accountability and demonstrates due diligence. The common risk management process and related activities are:

1. Identifying Issues, Setting Context
• Defining the problems or opportunities, scope, context (social, cultural, scientific evidence, etc.) and associated risk issues.
• Deciding on necessary people, expertise, tools and techniques (e.g., scenarios, brainstorming, checklists).
• Performing a stakeholder analysis (determining risk tolerances, stakeholder positions, attitudes).

2. Assessing Key Risk Areas
• Analyzing the context and results of the environmental scan and determining the types/categories of risk to be addressed, significant organization-wide issues, and vital local issues.

3. Measuring Likelihood and Impact
• Determining the degree of exposure, expressed as likelihood and impact, of assessed risks, and choosing tools.
• Considering both the empirical/scientific evidence and the public context.

4. Ranking Risks
• Ranking risks, considering risk tolerance, using existing or developing new criteria and tools.

5. Setting Desired Results
• Defining objectives and expected outcomes for ranked risks, short and long term.

6. Developing Options
• Identifying and analyzing options (ways to minimize threats and maximize opportunities), approaches and tools.

7. Selecting a Strategy
• Choosing a strategy, applying decision criteria (results-oriented, problem/opportunity driven).
• Applying, where appropriate, the precautionary approach/principle as a means of managing risks of serious or irreversible harm in situations of scientific uncertainty.

8. Implementing the Strategy
• Developing and implementing a plan.

9. Monitoring, Evaluating and Adjusting
• Learning and improving the decision-making/risk management process locally and organization-wide, using effectiveness criteria, and reporting on performance and results.

Organizations may vary the basic steps and supporting tasks most suited to achieving common understanding and implementing consistent, efficient and effective risk management. A focused, systematic and integrated approach recognizes that all decisions involve management of risk, whether in routine operations or for major initiatives involving significant resources. It is important that the risk management process be applied at all levels, from the corporate level to programs and major projects to local systems and operations. While the process allows tailoring for different uses, having a consistent approach within an organization assists in aggregating information to deal with risk issues at the corporate level. Exhibit 2: Risk Management in Public Policy: A Decision-Making Process


Exhibit 2 presents the model, developed by the PCO-led ADM Working Group on Risk Management, which addresses the issue of risk management in the context of public policy development. This model presents a basis for exploring issues of interest to government policy-makers, and provides a context in which to discuss, examine and seek out interrelationships between issues associated with public policy decisions in an environment of uncertainty and risk (i.e., a model of public risk management). As in Exhibit 1, this model recognizes six basic steps: identification of the issue; analysis or assessment of the issue; development of options; decision; implementation of the decision; and evaluation and review of the decision. In this model, several key elements were identified as influencing the public policy environment surrounding risk management:
• There is a public element to virtually all government decision-making, and it is a central and legitimate input to the process. Uncertainty in science, together with competing policy interests (including international obligations), has led to increased focus on the precautionary approach.
• A decision-making process does not occur in isolation: the public nature and complexity of many government policy issues means that certain factors, such as communications and consultation activities, legal considerations, and ongoing operational activities, require active consideration at each stage of the process.

Integrating Results for Risk Management into Practices at all Levels
The results of risk management are to be integrated both horizontally and vertically into organizational policies, plans and practices. Horizontally, it is important that results be considered in developing organization-wide policies, plans and priorities. Vertically, functional units, such as branches and divisions, need to incorporate these results into programs and major initiatives. In practice, the risk assessment and response to risk would be considered in developing local business plans at the activity, division or regional level. These plans would then be considered at the corporate level, and significant risks (horizontal or high-impact risks) would be incorporated into the appropriate corporate business, functional or operational plan. The responsibility centre providing the advisory and corporate challenge functions can add value to this process, since new risks might be identified and new risk management strategies required after the roll-up. There needs to be a synergy between the overall risk management strategy and the local risk management practices of the organization. Each function or activity would have to be examined from three standpoints:
• Its purpose: risk management would look at decision-making, planning, and accountability processes as well as opportunities for innovation;
• Its level: different approaches are required based on whether a function or activity is strategic, management or operational; and
• The relevant discipline: the risks involved with technology, finance, human resources, and those regarding legal, scientific, regulatory, and/or health and safety issues.

Tools and Methods
At a technical level, various tools and techniques can be used for managing risk. The following are some examples:
• Risk maps: summary charts and diagrams that help organizations identify, discuss, understand and address risks by portraying the sources and types of risks and the disciplines involved/needed;
• Modelling tools: such as scenario analysis and forecasting models to show the range of possibilities and to build scenarios into contingency plans;
• Framework on the precautionary approach: a principle-based framework that provides guidance on the precautionary approach in order to improve the predictability, credibility and consistency of its application across the federal government;
• Qualitative techniques: such as workshops, questionnaires and self-assessment to identify and assess risks; and
• Internet and organizational Intranets: promote risk awareness and management by sharing information internally and externally.

Exhibit 3 provides an example of a risk management model. In this model, one can assess where a particular risk falls in terms of likelihood and impact and establish the organizational strategy/response to manage the risk.

Exhibit 3: A Risk Management Model

In developing methods to provide guidance on risk management, the different levels of readiness and experience in a department, as well as variations in available resources, need to be recognized. Therefore, methods need to be flexible and simple, using clear language to ensure open channels of communication. Several practical methods that could be used to provide guidance are:
• A managers' forum: where risks are identified, proposed actions are discussed and best practices are shared;
• An internal risk management advisory function: dedicated to risk management, either as a special unit or associated with an existing functional unit; and
• Tool kits: a collection of effective risk management tools such as checklists, questionnaires and best practices.

Communication and Consultation
Communication of risk and consultation with interested parties are essential to supporting sound risk management decisions. In fact, communication and consultation must be considered at every stage of the risk management process. A fundamental requirement for practicing integrated risk management is the development of plans, processes and products through ongoing consultation and communication with stakeholders (both internal and external) who may be involved in or affected by an organization's decisions and actions.

Consultation and proactive citizen engagement will assist in bridging gaps between statistical evidence and perceptions of risk. It is also important that risk communication practices anticipate and respond effectively to public concerns and expectations. A citizen's request for information presents an opportunity to communicate about risk and the management of risk. In the public sector context, some high-profile risk issues would benefit from proactively involving parliamentarians in particular forums of discussion, thus creating opportunities for exchanging different perspectives. In developing public policy, input from both the empirical and public contexts ensures that a more complete range of information is available, leading to the development of more relevant and effective public policy options. Internally, risk communication promotes action, continuous learning, innovation and teamwork. It can demonstrate how management of a localized risk contributes to the overall achievement of corporate objectives.

Element 4: Ensuring Continuous Risk Management Learning
Continuous learning is fundamental to more informed and proactive decision-making. It contributes to better risk management, strengthens organizational capacity and facilitates integration of risk management into an organizational structure. To ensure continuous risk management learning, pursue the following outcomes:
• Learning from experience is valued and lessons are shared: a supportive work environment. Learning plans are built into the organization's risk management practices.
• Results of risk management are evaluated to support innovation, capacity building and continuous improvement: individual, team and organization.
• Experience and best practices are shared, internally and across government.

Creating a Supportive Work Environment
A supportive work environment is a key component of continuous learning. Valuing learning from experience, sharing best practices and lessons learned, and embracing innovation and responsible risk-taking characterize an organization with a supportive work environment. An organization with a supportive work environment would be expected to:

Promote learning
• By fostering an environment that motivates people to learn;
• By valuing knowledge, new ideas and new relationships as vital aspects of the creativity that leads to innovation; and
• By including and emphasizing learning in strategic plans.

Learn from experience
• By valuing experimentation, where opportunities are assessed for benefits and consequences;
• By sharing learning on past successes and failures; and
• By using lessons learned and best practices in planning exercises.

Demonstrate management leadership
• By selecting leaders who are coaches, teachers and good stewards;
• By demonstrating commitment and support to employees through the provision of opportunities, resources and tools; and
• By making time, allotting resources and measuring success through periodic reviews (e.g., learning audits).

Building Learning Plans in Practices
Since continuous learning contributes significantly to increasing the capacity to manage risk, the integration of learning plans into all aspects of risk management is fundamental to building capacity and supporting the strategic direction for managing risk. As part of a unit's learning strategy, learning plans provide for the identification of the training and development needs of each employee. Effective learning plans, reflecting risk management learning strategies, are linked to both operational and corporate strategies, incorporate opportunities for managers to coach and mentor staff, and address competency gaps (knowledge and skills) for individuals and teams. The inclusion of risk management learning objectives in performance appraisals is a useful approach to support continuous risk management learning.

Supporting Continuous Learning and Innovation
In implementing a continuous learning approach to risk management, it is important to recognize that not all risks can be foreseen or totally avoided. Procedures are paramount to ensure due diligence and to maintain public confidence. Goals will not always be met and innovations will not always lead to expected outcomes. However, if risk management actions are informed and lessons are learned, promotion of a continuous learning approach will create incentives for innovation while still respecting organizational risk tolerances. The critical challenge is to show that risk is being well managed and that accountability is maintained, while recognizing that learning from experience is important for progress.

In addition to demonstrating accountability, transparency and due diligence, proper documentation may also be used as a learning tool. Practicing integrated risk management should support innovation, learning and continuous improvement at the individual, team and organization level. An organization demonstrates continuous learning with respect to risk management if:
• An appropriate risk management culture is fostered;
• Learning is linked to risk management strategy at many levels;
• Responsible risk-taking and learning from experience is encouraged and supported;
• There is considerable information sharing as the basis for decision-making;
• Decision-making includes a range of perspectives, including the views of stakeholders, employees and citizens; and
• Input and feedback are actively sought and are the basis for further action.

Notes:

LESSON 4: DEFINING OPTIMAL RISK LEVEL

Risks that Corporations Face
The risks that corporations face can be classified into three categories:
• Price or Market Risk
• Counter party or Credit Risk
• Operating Risk

Price or Market Risk
This is the risk of loss due to changes in market prices. Price risk can increase further due to market liquidity risk, which arises when large positions in individual instruments or exposures reach more than a certain percentage of the market, instrument or issue. Such a large position could be potentially illiquid, not capable of being replaced or hedged out at the current market value, and as a result may be assumed to carry extra risk.

Counter party or Credit Risk
Dealing Risk
Dealing risk is the sum total of all unsettled transactions due for all dates in the future. If the counter party goes bankrupt on any day, all unsettled transactions would have to be redone in the market at the current rates. The loss would be the difference between the original contract rate and the current rates. Dealing risk is therefore limited to the movement in prices and is measured as a percentage of the total exposure.

Settlement Risk
Settlement risk is the risk of the counter party defaulting on the day of settlement. The risk in this case would be 100% of the exposure if the corporate gives value before receiving value from the counter party. In addition, the transaction would have to be redone at the current market rates.
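The distinction between the two exposures can be illustrated with a minimal Python sketch. The notional and rates below are illustrative assumptions, not figures from the text: dealing risk is only the cost of replacing the trade at the current rate, while settlement risk is the full value given away on settlement day.

```python
# Minimal sketch of dealing risk versus settlement risk (assumed figures).

def dealing_risk_loss(notional, contract_rate, current_rate):
    """Cost of redoing an unsettled purchase at the current market rate."""
    return notional * max(current_rate - contract_rate, 0.0)

def settlement_risk_loss(notional, current_rate):
    """Worst case on settlement day: value is paid away, nothing is received."""
    return notional * current_rate

if __name__ == "__main__":
    notional = 1_000_000        # foreign currency bought forward (assumed)
    contract_rate = 45.00       # rate agreed with the counter party (assumed)
    current_rate = 46.50        # market rate when the default is discovered (assumed)

    print("Dealing risk loss:   ", dealing_risk_loss(notional, contract_rate, current_rate))
    print("Settlement risk loss:", settlement_risk_loss(notional, current_rate))
```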

Operating Risk
Operational risk is the risk that the organization may be exposed to financial loss either through human error, misjudgment, negligence and malfeasance, or through uncertainty, misunderstanding and confusion as to responsibility and authority. The different kinds of operating risk are:
• Legal
• Regulatory
• Errors & Omissions
• Frauds
• Custodial
• Systems

Legal
Legal risk is the risk that the organisation will suffer financial loss either because contracts or individual provisions thereof are unenforceable or inadequately documented, or because the precise relationship with the counter party is unclear.

Regulatory
Regulatory risk is the risk of doing a transaction which is not as per the prevailing rules and laws of the country.

Errors & Omissions
Errors and omissions are not uncommon in financial operations. These may relate to the price, amount, value date, currency, buy/sell side or settlement instructions.

Frauds
Some examples of frauds are:
• Front running
• Circular trading
• Insider trading
• Undisclosed personal trading
• Routing deals to select brokers

Custodial
Custodial risk is the loss of prime documents due to theft, fire, water, termites, etc. This risk is enhanced when the documents are in transit.

Systems
Systems risk arises from significant deficiencies in the design or operation of supporting systems; from the inability of systems to develop quickly enough to meet rapidly evolving user requirements; or from the establishment of a great many diverse, incompatible system configurations which cannot be effectively linked by the automated transmission of data and which require considerable manual intervention.

Determining Optimal Risk
Seasoned traders know the importance of risk management. If you risk little, you win little. If you risk too much, you eventually run to ruin. The optimum, of course, is somewhere in the middle. Here, Ed Seykota of Galt Capital and Dave Druz of Tactical Investment Management present a method to measure risk and return.

Placing a trade with a predetermined stop-loss point can be compared to placing a bet: the more money risked, the larger the bet. Conservative betting produces conservative performance, while bold betting leads to spectacular ruin. A bold trader placing large bets feels pressure, or heat, from the volatility of the portfolio. A hot portfolio keeps more at risk than does a cold one. Portfolio heat seems to be associated with personality preference: bold traders prefer and are able to take more heat, while more conservative traders generally avoid the circumstances that give rise to heat. In portfolio management, we call the distributed bet size the heat of the portfolio. A diversified portfolio risking 2% on each of five instruments has a total heat of 10%, as does a portfolio risking 5% on each of two instruments. Our studies of heat show several factors, which are:

1. Trading systems have an inherent optimal heat.
2. Setting the heat level is far and away more important than fiddling with trade timing parameters.
3. Many traders are unaware of both these factors.

Coin Flipping
One way to understand portfolio heat is to imagine a series of coin flips. "Heads, you win two; tails, you lose one" is a fair model of good trading. The heat question is: what fixed fraction of your running total stake should you bet on a series of flips? Participants generally come up with some amazingly complex ways to arrive at a solution. Overall, the simplest way is to notice that:
• In the long run, heads and tails balance.
• The order of heads and tails doesn't matter to the outcome.
• The result after n head/tail cycles is just the result of one head/tail cycle raised to the nth power.

So we can get our answer simply by making a table of results of just one head/tail cycle. Figure 1 represents such a heat test. It shows an optimal bet size of 25%, at which point one head/tail cycle delivers 12.5% profit, after a 50% gain and a 25% drawdown. As is typical of heat tests, at low heat, performance rises linearly with bet size. At high heat, performance falls as losses dominate, because drawdowns are proportional to heat squared (see Figures 2 and 3). In practice, a trader may prefer to bet the coin at less than optimal heat, say 15% to 20%, taking a slightly smaller profit to avoid some drawdown-induced stress.

Heat tests show profitability and volatility over a range of bet sizes. Heat tests can help traders communicate with their investors about, and ultimately align on, betting strategy before trading begins. Otherwise, investors may become disenchanted with traders who trade well yet ultimately deliver either too little or too much heat.

Actual Heat Test
To study actual portfolio betting strategies, we fired up our system-testing engine and simulated a trading system over a range of heats. The engine trades all instruments simultaneously and rolls deliveries forward to stay with the most active deliveries. The results of the 12-year simulation recall the coin flips described previously. Return initially rises with increasing heat and then falls as drawdowns dominate. This heat test shows optimal performance for heat around 140% (about 28% for each of five instruments), at which point the system delivers about 55% return per annum with average drawdown around 40% per annum and maximum drawdown over 90%. In actual practice, few investors would have the stomach for such an optimum. Most would prefer fewer drawdowns and less gain. In any event, heat testing can provide a focus for traders and their investors and help align on critical issues of bet sizing, return and drawdown before beginning new trading relationships.


FIGURE 1: The percent bet is the percentage of the running stake. The win on heads is always 200% of the bet; the loss on tails is the bet. The final total shows the result of one cycle. Beyond a 25% bet (lower half of table) the final total begins to suffer.

FIGURE 2: Plotting the return versus the heat illustrates that the optimal amount bet is 25% of the stake. The curve has a peak (point of zero slope) at 25%.

FIGURE 3: The optimal level of heat is near 140%; then increasing heat causes losses to dominate


FIGURE 4: As heat is increased, the size of the drawdowns reaches a maximum of over 90%.

SIDEBAR: COIN FLIPPING MATH
To find the optimal bet size for a coin that, on heads, wins two times the fraction bet and, on tails, loses the fraction bet of the running total stake, write the result of one head/tail cycle as a function of the bet:

Return = (Result of heads)(Result of tails) = (1 + 2Bet)(1 - Bet) = 1 + Bet - 2Bet²

The optimal return, at the top of the curve in Figure 2, is the point of zero slope of the curve. It is found by setting the first derivative to zero:

d(Return)/d(Bet) = 1 - 4Bet = 0, so Bet = 0.25
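The sidebar's algebra can be checked numerically. The short Python sketch below (not part of the original article) evaluates the one-cycle return over a grid of bet fractions and recovers the 25% optimum and the 12.5% profit per cycle.

```python
# One head/tail cycle multiplies the stake by (1 + 2b)(1 - b) for bet fraction b.

def cycle_return(bet_fraction):
    """Growth factor of the stake after one head/tail cycle."""
    return (1 + 2 * bet_fraction) * (1 - bet_fraction)

if __name__ == "__main__":
    bets = [i / 100 for i in range(0, 51, 5)]   # 0%, 5%, ..., 50%
    for b in bets:
        print(f"bet {b:>4.0%}  cycle return {cycle_return(b):.4f}")

    best = max(bets, key=cycle_return)
    print(f"optimal bet ~ {best:.0%}, cycle profit {cycle_return(best) - 1:.1%}")
```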

SIDEBAR: SYSTEM TEST
Portfolio: five instruments - soybean oil, live cattle, sugar, gold and Swiss francs.
Time span: December 19, 1979, through January 28, 1992.
Trading system: enter positions on a stop, close only, 2 ticks beyond the 20-week price range, updated weekly. Exit on a stop 2 ticks beyond the three-week price range, also updated weekly.
Bet size upon entry: (Equity)(Heat)/(Number of instruments).
Number of contracts per trade: (Bet size)/(Entry risk), based on the three-week risk point at the time of entry.

For example, for equity = $100,000, heat = 10% and number of instruments = 2:
Bet size = ($100,000)(0.10)/(2) = $5,000

The number of contracts per trade incorporates the risk identified by the three-week range at the time of an entry signal (a 2-tick breakout of the 20-week range, stop close only). Say a buy signal occurred and the three-week low is $2,500 per contract away. The number of contracts is:
Number of contracts = (Bet size)/(Entry risk) = $5,000/$2,500 = 2 contracts
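The sidebar's position-sizing arithmetic is easy to reproduce; the sketch below uses the example figures quoted above ($100,000 equity, 10% heat, two instruments, $2,500 entry risk per contract).

```python
# Sketch of the sidebar's bet-sizing and contract-count arithmetic.

def bet_size(equity, heat, num_instruments):
    """Dollar amount risked per instrument: (Equity)(Heat)/(Number of instruments)."""
    return equity * heat / num_instruments

def contracts(bet, entry_risk_per_contract):
    """Number of contracts = bet size / entry risk, rounded down to whole contracts."""
    return int(bet // entry_risk_per_contract)

if __name__ == "__main__":
    bet = bet_size(equity=100_000, heat=0.10, num_instruments=2)
    print("Bet size:", bet)                        # 5000.0
    print("Contracts:", contracts(bet, 2_500))     # 2
```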

Caution: For purposes of demonstrating heat testing, we chose the 20-week and three-week box system for its simplicity. No claims are made about future performance. Indeed, the profitable results largely reflect having retrospectively placed some good trend commodities in the portfolio. Furthermore, the results were best in the early years and seemed to degenerate.

Notes:


LESSON 5: CASE STUDY OF STANDARD TRUCKING CORPORATION


Introduction
The objective of this case is to acquaint the student with the basic financial considerations and processes employed in risk management. It focuses on the financial risk management principles underlying a sound risk management program. Use of the trucking industry for this case is arbitrary, and does not imply a risk management study for this industry per se. Emphasis is directed to the risk financing function as contrasted with identification, evaluation and control of risks. In practice all of these are essential to achieving success in risk management, and this lack of emphasis is not intended to diminish their importance. This case is written from the viewpoint of a beginning student of risk management, as opposed to that of an experienced, professional risk manager seeking to sharpen existing skills.

The Case
Mark Edwards has recently joined Standard Trucking Corporation (STC) as Assistant Treasurer. In addition to other traditional financial duties, Edwards' new responsibilities involve the risk management function, including the purchase of insurance for the corporation. This activity was formerly handled by Al Avery, the present chief financial officer, a principal and cofounder of STC. Mr. Avery has charged Edwards to "get a handle on the rising costs of property and liability insurance." Edwards enthusiastically accepted the challenge; however, he knew this would involve more than insurance alone.

Background
STC has been in operation for four years. It is a closely held corporation, owned by its President, John Schmidt (55%), Vice President and Treasurer, Al Avery (25%), and two other investors (10% each), who do not participate actively in the management of STC. STC has been very successful in its various operations during its brief time in business. The principal business functions of STC are the operation, leasing and management of long-haul units used in the transportation of non-hazardous cargo, as a common carrier. The company's operations are divided between its ownership and operation of various tractor and trailer units, and the management of other investor-owned trucking units. The trucking units are either dispatched from one of the company's own two terminals, or from one of the terminals of a large trucking/transportation company which contracts with STC and other similar trucking companies. STC presently owns some 100 truck units. All remaining units are owned by other independent investor syndicates, but are managed by STC under contract. Contract drivers, who are paid by the mile for operating the trucks, drive all trucks. Although these drivers are not considered to be employees of STC, because of contract requirements with several of the shippers, workers compensation coverage is carried voluntarily for these drivers by STC. No other employee benefits are provided for the contract drivers by STC. Recent court decisions in the state in which STC is domiciled have supported STC's operating agreement with the contract drivers as being valid. The agreement between STC and the drivers establishes them as independent contractors, with the drivers leasing the trucks from STC for each trip. John Schmidt is also the President of Investors Trucking Association (ITA), a trade group of independently owned trucking companies. The purpose of this trade group is to support beneficial actions in legislation, share solutions to mutual problems, and to develop industry-wide operating standards for this segment of the trucking industry.

Actions Already Taken
Before Edwards began his task of getting "a handle on the rising costs of property and liability insurance," it was necessary for him to obtain various records and information from STC's corporate files. Initially, the items he requested were: (1) STC's current financial statements (Balance Sheet and Income Statement); (2) a Schedule of Insurance Costs; and (3) the Claim and Loss Records (sometimes referred to as Claim Runs) supplied by each of STC's insurers. In examining these records, Edwards also considered how to approach controlling STC's cost of insurance. Until now, Edwards had always considered premium cost to be the cost of insurance. Now, as risk manager, he has a greater appreciation of the fact that premiums can be influenced by such basic factors as errors in the information provided on the applications submitted to the various companies, as well as errors in classification, rating, or property valuation. Moreover, the ultimate cost can be determined only after any retrospective rating or other premium adjustments, including the audit, are completed, and the choice of the rating plan may be crucial. Although his examination of the records revealed no serious omissions, Edwards still was convinced there had to be some ways (heretofore neglected) to reduce insurance costs. Edwards was eager to explore alternatives to the purchase of commercial insurance to reduce STC's exposure to loss. He was aware that the insurance industry follows cycles of alternating tight and soft markets which may drive premiums up or down at renewal. Indeed, in a tight market it might even be difficult for insurance brokers to place some of STC's insurance coverage, since the trucking industry is generally considered a less than desirable class of exposure in a restrictive insurance market. Finally, Edwards knows that one of the most direct ways to cut insurance costs is through a rigid program of loss prevention and control.

FINANCIAL STATEMENTS



NET of Deductible s 32,200 185,190 233,400 GENERAL LIABILITY PROPERTY (0) 0 ( 1) 714 (0) 0 (0) 0 (0) 0 ( 1) 1,600 ( 1) 850 (0) 0


Third-Year Abbreviated Balance Sheet (in thousands)

ASSETS
Current Assets  $640
Fixed Assets  10,400

LIABILITIES
Current Liabilities  $299
Liabilities and Debt  7,900
Total Liabilities  8,199

SHAREHOLDERS' EQUITY
Stock  1,841
Capital  1,000
Total S/H Equity  2,841

Loss size ranges: $100-500 / $501-2,000 / $2,001-10,000 / $10,001 & above / TOTALS.  FLEET LIABILITY

NUMBER & AMOUNT OF INCURRED LOSSES (Paid and Reserved) lst Year # WORKERS COMPENSATION (4) 700 (10) 2,500 (15) 3,100 ( 6) 6,200 (22) ( 8) 3,600 8,100 24,000 38,100 73,800 (3) 3,100 ( 4) 3,600 (2) 5,700 ( 3) 6,800 (0) 0 Amt. 2nd Year # Amt. 3rd Year # Amt. Present Year (Est.) # Amt.

( 5) 16,300 ( 7)

TOTAL ASSETS $11,040 TOTAL LIABILITY & S/H EQUITY $11,040

( 1) 12,500 ( 1) 28,000 ( 2)

(9) 9,500 (18) 25,400 (27) 53,600 (39)

Third-Year Income Statement


Income from Owned Units  $5,170,000
Income from Investor Units  2,940,000
Other Income  865,000
Gross Income  8,975,000
Cost of Operations  6,462,720
Gross Profit  2,512,280
Other Expenses  1,307,425
Net Profit (Before Taxes)  1,204,855
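As a quick arithmetic check (not part of the case exhibits), the income statement figures tie out, as the short sketch below confirms.

```python
# Verifying that the Third-Year Income Statement figures are internally consistent.

income = {
    "Income from Owned Units": 5_170_000,
    "Income from Investor Units": 2_940_000,
    "Other Income": 865_000,
}
gross_income = sum(income.values())           # 8,975,000
gross_profit = gross_income - 6_462_720       # less Cost of Operations
net_profit = gross_profit - 1_307_425         # less Other Expenses

print("Gross Income:", gross_income)          # 8975000
print("Gross Profit:", gross_profit)          # 2512280
print("Net Profit (Before Taxes):", net_profit)  # 1204855
```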

$ 100 - 500 501 - 2,000 2,001 - 10,000 10,000 & Above TOTALS

(1) 420

( 4) 1,610

( 6) 1,800 (10) 7,800

( 8) (13)

2,800 14,300 32,340 167,5OO 216,940

(2) 1,200 ( 4) 4,800 (0) 0

(4) 9,100 ( 6) 18,700 (11) 39,800 (14) ( 2) 36,500 ( 3) 96,000 ( 2) (7) 10,720 (16) 61,610 (30) 145,400 (37)

FLEET PHYSICAL DAMAGE $ 100- 500 501-2,000 2,001- 5,000 5,001- 10,000 10,001- 20,000 20,001 & Above (2) 750 ( 5) 1,510 ( 6) 1,370 ( 5) 3,110 (12) (10) 3,680 7,090 34,700 77,200 79,400 (3) 1,810 ( 4) 2,890 (1) 2,800 ( 4) 8,200

(3) 18,700 ( 9) 63,700 (12) 70,900 (19) (1) 15,700 ( 5) 72,000 ( 8) 105,700 ( 6) (0) 0 ( 1) 60,200 ( 2) 101,700 ( 2)

TOTAL (incldg deductibles) (10) 39,760 (28) 208,500 (39) 307,580 (60)

Schedule of Insurance Costs
(Columns: 1st Year / 2nd Year / 3rd Year / Present Year (Est.))

PROPERTY
Premium: $750 / 750 / 2,000 / 3,970
Value: $1,500,000 / 1,500,000 / 2,000,000 / 3,300,000

WORKERS COMPENSATION
Premium: $12,570 / 29,700 / 57,800 / 98,060
Payroll: $601,400 / 1,340,400 / 2,001,400 / 3,100,900

GENERAL LIABILITY ($500,000 Combined Single Limit)
Premium: $12,500 / 13,750 / 16,500 / 29,000
Sq. Ft.: 22,000 / 22,000 / 22,000 / 22,000

FLEET LIAB. OWNED & MANAGED (BI: $250,000 Per Pers./$500,000 Per Acc.; PD: $250,000)
Premium: $50,000 / 82,500 / 180,000 / 360,000
Units: 50 / 80 / 120 / 160

FLEET PHYSICAL DAMAGE
Premium: $120,000 / 172,000 / 410,000 / 618,000
Value: $3,000,000 / 4,800,000 / 8,400,000 / 12,000,000
Deductible: $1,000 / 1,000 / 2,500 / 2,500

UMBRELLA ($5,000,000 Excess of Primary)
Premium: $12,500 / 16,000 / 36,000 / 75,000

LOSS INFORMATION SUMMARY

QUESTIONS FOR DISCUSSION
1. What information should Edwards be interested in obtaining from the financial records requested, and how might he use this information? Also, what other financial information might be useful?
2. [A] What information can Edwards obtain from the Schedule of Insurance Costs, and how might he use this information? [B] What other insurance information might be useful?
3. [A] What information can be found in the Loss Runs, and how might Edwards use this information? [B] What other information might be helpful in reducing loss costs?

4. What can Edwards do now to reduce the costs of insurance to cover STC's exposures to loss?
5. Discuss the alternative risk financing methods Edwards might consider, and their potential advantages and disadvantages.
6. In addition to the obvious areas of concern (fleet liability, fleet physical damage, and workers compensation), what other areas of potential risk should concern Edwards?
7. What should Edwards consider in determining the optimal deductible decisions for his company's property insurance and self-insured retention (SIR) for liability insurance?
8. What other actions might Edwards take to reduce STC's costs for fleet exposures, other than improved risk financing methods?

Notes:

( 6) 24,800 (11) 114,800 316,870 212,200


LESSON 6: INTERACTIVE SESSION


LESSON 7: BASIC CONCEPT OF RISK MEASUREMENT


UNIT I CHAPTER 4 RISK IDENTIFICATION & MEASUREMENT

Chapter Objectives
• Discuss frameworks for identifying business and individual risk exposures.
• Review concepts from probability and statistics.
• Apply mathematical concepts to understand the frequency and severity of losses.
• Explain the concepts of maximum probable loss and value at risk.

Risk Identification
The five major steps in the risk management decision-making process are: (1) identify all significant risks that can cause loss; (2) evaluate the potential frequency and severity of losses; (3) develop and select methods for managing risk; (4) implement the risk management methods chosen; and (5) monitor the suitability and performance of the chosen risk management methods and strategies on an ongoing basis. This chapter focuses on the first two steps of this process.

Identifying Business Risk Exposures
The first step in the risk management process is risk identification: the identification of loss exposures. Unidentified loss exposures most likely will result in an implicit retention decision, which may not be optimal. There are various methods of identifying exposures. For example, comprehensive checklists of common business exposures can be obtained from risk management consultants and other sources. Loss exposures also can be identified through analysis of the firm's financial statements, discussions with managers throughout the firm, surveys of employees, and discussions with insurance agents and risk management consultants. Regardless of the specific methods used, risk identification requires an overall understanding of the business and the specific economic, legal, and regulatory factors that affect the business.

Property Loss Exposure
Some of the major practical questions asked when identifying property loss exposures for businesses are listed in Table 3.1. In addition to identifying what property is exposed to loss and the potential causes of loss, the firm must consider how property should be valued for the purpose of making risk management decisions. Several valuation methods are available. Book value (the purchase price minus accounting depreciation) is the method commonly used for financial reporting purposes. However, since book value does not necessarily correspond to economic value, it generally is not relevant for risk management purposes. Market value is the value that the next-highest-valued user would pay for the property. Firm-specific value is the value of the property to the current owner. If the property does not provide firm-specific benefits, then firm-specific value will equal market value. Otherwise, firm-specific value will exceed market value. Replacement cost new is the cost of replacing the damaged property with new property. Due to economic depreciation and improvements in quality, replacement cost new often will exceed the market value of the property.
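A minimal sketch of these valuation distinctions follows; the asset figures are assumptions for illustration, not values drawn from Table 3.1 or the text.

```python
# Illustrative only: all figures below are assumed, not from the chapter.

def book_value(purchase_price, accumulated_depreciation):
    """Book value = purchase price minus accounting depreciation."""
    return purchase_price - accumulated_depreciation

purchase_price = 400_000           # what the firm paid for a machine (assumed)
accumulated_depreciation = 250_000  # accounting depreciation to date (assumed)
market_value = 120_000             # what the next-highest-valued user would pay (assumed)
replacement_cost_new = 550_000     # cost of replacing with new property (assumed)

print("Book value:          ", book_value(purchase_price, accumulated_depreciation))
print("Market value:        ", market_value)
print("Replacement cost new:", replacement_cost_new)
```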

Indirect losses also can arise from damage to property that will be repaired or replaced. For example, if a fire shuts down a plant for four months, the firm not only incurs the cost of replacing the damaged property, it also loses the profits from not being able to produce. In addition, some operating expenses might continue despite the shutdown (e.g., salaries for certain managers and employees and advertising expenses). These exposures are known as business income exposures (or, sometimes, business interruption exposures), and they frequently are insured with business interruption insurance. Note that business interruption losses also might result from property losses to a firm's major customers or suppliers that prevent them from transacting with the firm. This exposure can be insured with contingent business interruption insurance. Firms also may suffer losses after they resume operations if previous customers that have switched to other sources of supply do not return. In the event that a long-term loss of customers would occur and/or a shutdown temporarily would impose large costs on customers or suppliers, it might be optimal for the firm to keep operating following a loss by arranging for the immediate use of alternative facilities at higher operating costs. The resulting exposure to higher costs is known as the extra expense exposure. Insurance purchased to reimburse the firm for these higher costs is known as extra expense coverage.

Liability Losses
As we analyze in detail in later chapters, firms face potential legal liability losses as a result of relationships with many parties, including suppliers, customers, employees, shareholders, and members of the public. The settlements, judgments, and legal costs associated with liability suits can impose substantial losses on firms. Lawsuits also may harm firms by damaging their reputation, and they may require expenditures to minimize the costs of this damage. For example, in the case of liability to customers for injuries arising out of the firm's products, the firm might incur product recall expenses and higher marketing costs to rehabilitate a product.

Losses to Human Resources
Losses in firm value due to worker injuries, disabilities, death, retirement, and turnover can be grouped into two categories. First, as a result of contractual commitments and compulsory benefits, firms often compensate employees (or their beneficiaries) for injuries, disabilities, death, and retirement. Second, worker injuries, disabilities, death, retirement, and turnover can cause indirect losses when production is interrupted and employees cannot be replaced at zero cost with other employees of the same quality. In some cases, firms purchase life insurance to compensate for the death or disability of important employees.

Losses from External Economic Forces
The final category of losses arises from factors that are outside of the firm. Losses can arise because of changes in the prices of inputs and outputs. For example, increases in the price of oil can cause large losses to firms that use oil in the production process. Large changes in the exchange rate between currencies can increase a multinational firm's costs or decrease its revenues. As another example, an important supplier or purchaser can go bankrupt, thus increasing costs or decreasing revenues.

Identifying Individual Exposures
One method of identifying individual/family exposures is to analyze the sources and uses of funds in the present and planned for the future. Potential events that cause decreases in the availability of funds or increases in uses of funds represent risk exposures (see Box 3.1). Because both physical and financial assets represent potential future sources of funds, potential losses in asset values also represent risk exposures. Just as business risk management consultants can aid in the identification of business risks, individual/family financial planners can help identify and then manage personal risks. An important risk for most families is a drop in earnings prior to retirement due to the death or disability of a breadwinner. The magnitude of this risk depends, among other factors, on the number and age of dependents and on alternative sources of income (e.g., a spouse's income or investment income). The losses due to death or disability can be managed with life and disability insurance.

The risk of a drop in earnings prior to retirement due to external economic factors is also an important risk facing households. Private methods for dealing with this risk, except for perhaps investments in education, are limited. Some public support often is available in the form of compulsory social insurance and unemployment insurance programs. One of the most important sources of risk for most individuals and families is from medical expenses. The methods of dealing with this risk vary across countries. Some countries, like the United States, rely largely on the private medical and insurance industry to provide or pay for services and insurance to deal with medical expense risk. Other countries, such as Canada and the United Kingdom, rely more on government provision of medical services and insurance. Another major source of expense risk is from personal liability exposures. Individuals can be sued and held liable for damages inflicted on others. The main sources of personal liability arise from driving an automobile and owning property with potential hazards. These risks are typically managed by using loss control and purchasing liability insurance. Retirement often implies a large drop in earnings. To continue to pay living expenses during retirement, an individual needs to have saved substantial funds prior to retirement and/or rely on public programs, such as social security. The risk associated with pre-retirement savings, and thus the risk of not having sufficient assets during retirement to fund expenses, depends on how the assets are invested. The choice of assets (for example, between stocks, bonds, and real estate) is an important risk management decision for all individuals and households. Even after someone has retired with substantial assets, the person faces the risk of living so long that all savings are depleted prior to death. This longevity risk can be managed using annuities, including government-mandated annuities, such as those provided in the U.S. social security system.

Basic Concepts from Statistics & Probability


Risk assessment and measurement require a basic understanding of several concepts from probability and statistics. We review these concepts in this section.

Random Variables and Probability Distributions
A random variable is a variable whose outcome is uncertain. For example, suppose a coin is to be flipped and the variable X is defined to be equal to $1 if heads appears and -$1 if tails appears. Then prior to the coin flip, the value of X is unknown; that is, X is a random variable. Once the coin has been flipped and the outcome revealed, the uncertainty about X is resolved, because the value of X is then known.

Information about a random variable is summarized by the random variable's probability distribution. In particular, a probability distribution identifies all the possible outcomes for the random variable and the probability of the outcomes. For the coin flipping example, Table 3.2 gives the probability distribution for X.

In addition to describing a probability distribution by listing the outcomes and probabilities, we also can describe probability distributions graphically. Figure 3.1 illustrates the probability distribution for the coin flipping example. On the horizontal axis, we graph the possible outcomes. On the vertical axis, we graph the probability of a particular outcome. There are only two possible outcomes in this very simple example: $1 and -$1, and the probability of each is 0.5. When discussing random variables, we use the term actual or observed outcome (or, sometimes, realized outcome) to refer to the outcome observed (realized) in a particular case, as opposed to the possible outcomes that could have occurred. In the coin flipping example, once the coin has been tossed we can observe the actual outcome, which either must be $1 or -$1.


As emphasized in the first two chapters, risk management decisions need to be made prior to knowing what the actual (realized) outcomes of key variables will be. Managers do not know beforehand which outcomes of the random variables affecting the firm's profits will occur. Nevertheless, they must make decisions. Once the outcomes are observed, it usually is easy to say what would have been the best decision. However, we cannot evaluate decisions from this perspective, which is why probability distributions are so important. Probability distributions tell us all of the possible outcomes and the probability of those outcomes. Information about probability distributions is needed to make good risk management decisions.

As a second example of a probability distribution, we can approximate the probability distribution for the dollar amount of damages to your car during the coming year. For simplicity, our approximation will assume only five possible levels of damages: $0; $500; $1,000; $5,000; and $10,000. The probabilities of each of these outcomes are listed in Table 3.3. The most likely outcome is zero damages, and the least likely outcome is that damages equal $10,000. Note that the sum of the probabilities equals 1; this must always be the case. An alternative way of describing the probability distribution is provided by Figure 3.2, where the height of each dotted line gives the probability of each possible outcome.
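Since Table 3.3 itself is not reproduced here, the short sketch below represents such a loss distribution with assumed probabilities that respect the constraints stated in the text: the most likely outcome is $0, the least likely is $10,000, and the probabilities sum to 1.

```python
# A discrete loss distribution like Table 3.3 (probabilities are assumed).

damage_distribution = {
    0: 0.50,
    500: 0.30,
    1_000: 0.10,
    5_000: 0.06,
    10_000: 0.04,
}

# Probabilities of all possible outcomes must sum to 1.
assert abs(sum(damage_distribution.values()) - 1.0) < 1e-9

for outcome, prob in damage_distribution.items():
    print(f"damage ${outcome:>6,}  probability {prob:.2f}")
```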


As a final example, consider an automaker. Two of the many reasons why the automaker's profits are uncertain are steel price changes and labor conditions. In the language just introduced, the automaker's profits are a random variable. There are numerous possible outcomes for the automaker's profits. For example, steel prices could increase so much that profits could be negative. On the other hand, favorable outcomes for steel prices and the economy could cause very high profits. What is the probability distribution for the automaker's profits? Recall that a probability distribution identifies all of the possible outcomes and associates a probability with each outcome. The coin flipping example had only two possible outcomes and so listing the probabilities was simple. In the automaker example, however, we could spend hours listing all the possible outcomes for profits and still not be finished, due to the large number of possible outcomes. In these situations, it is useful to assume that the possible outcomes can be any number between two extremes (the minimum possible outcome and the maximum possible outcome) and that the probability of the outcomes between the extremes is represented by a specific mathematical function. For example, assume that profits for the automaker could be any number between -$20 million and $50 million. Just as with the earlier graphs, we can identify the possible outcomes for profits between these amounts on the horizontal axis of Figure 3.3, which illustrates the probability distribution for the automaker's profits. Analogous to the earlier graphs, the vertical axis will measure the probability of the possible outcomes. The probabilities of the outcomes are illustrated in Figure 3.3 by a bell-shaped curve, which might appear familiar to you.
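The text does not give parameters for the bell-shaped curve, so the following sketch simply assumes a normal distribution with a mean of $15 million and a standard deviation of $10 million to show how interval probabilities correspond to areas under such a curve.

```python
# Interval probabilities under an assumed normal distribution of profits.

from statistics import NormalDist

profits = NormalDist(mu=15, sigma=10)   # profits in $ millions (assumed parameters)

p_above_40 = 1 - profits.cdf(40)
p_below_0 = profits.cdf(0)
p_10_to_30 = profits.cdf(30) - profits.cdf(10)

print(f"P(profit > $40M)        = {p_above_40:.3f}")
print(f"P(profit < $0)          = {p_below_0:.3f}")
print(f"P($10M < profit < $30M) = {p_10_to_30:.3f}")
```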


Since the area under the curve in Figure 3.3 equals 1, we can graphically identify the probability that profits are within a certain interval. For example, the probability that profits are greater than $40 million is the area under the curve to the right of $40 million. The probability that profits are less than $0 is the area under the curve to the left of $0. The probability that profits are between $10 and $30 million is the area under the curve between $10 and $30 million. Thus, the bell-shaped curve in Figure 3.3 tells us that for the automaker, there is a relatively high probability that profits will be between $10 and $30 million. In contrast, while very low profits and very high profits are possible, they do not have a high probability of happening.

Recall that the sum of the probabilities of all the possible outcomes must equal 1 (some outcome must occur). In the coin flipping example and the automobile damage example, this property is easy to verify because the number of possible outcomes is small. Stating that the probabilities sum to 1 in these examples is equivalent to stating that the heights of the dotted lines in Figures 3.1 and 3.2 sum to 1. This is a useful observation because it helps to illustrate the analogous property in the automaker example, where any outcome between -$20 million and $50 million is possible. You can think of the curve in Figure 3.3 as a curve that connects the tops of many thousands of bars that have very small widths, and the sum of the heights of all these bars is equivalent to the area under the curve. Thus, stating that the probabilities must sum to 1 is equivalent to stating that the area under the curve must equal 1.

Concept Checks
1. What information is given by a probability distribution? What are the two ways of describing a probability distribution?
2. Earthquakes are rare, but the property damage can be very large when they occur. Illustrate these features by drawing a probability distribution for property losses due to an earthquake for a business that has property valued at $50 million. Identify on your graph the probability that losses will exceed $30 million.

Characteristics of Probability Distributions
In many applications, it is necessary to compare probability distributions of different random variables. Indeed, most of the material in this book is concerned with how decisions (e.g., whether to purchase insurance) change probability distributions. Understanding how decisions affect probability distributions will lead to better decisions. The problem is that most probability distributions have many different outcomes and are difficult to compare. It is therefore common to compare certain key characteristics of probability distributions: the expected value, variance or standard deviation, skewness, and correlation.

Expected Value

The expected value of a probability distribution provides information about where the outcomes tend to occur, on average. For example, if the expected value of the automaker's profits is $10 million, then profits should average about $10 million. Thus, a distribution with a higher expected value will tend to have a higher outcome, on average. To calculate the expected value, you multiply each possible outcome by its probability and then add up the results. In the coin flipping example there are two possible outcomes for X, either -$1 or $1. The probability of each outcome is 0.5. Therefore, the expected value of X is $0:

E(X) = (0.5)($1) + (0.5)(-$1) = $0

If one were to play the coin flipping game many times, the average outcome would be approximately $0. This does not imply that the actual value of X on any single toss will be $0; indeed, the actual outcome for one toss is never $0. To define expected value in general terms, let the possible outcomes of a random variable, X, be denoted by x1, x2, x3, ..., xM (these correspond to -$1 and $1 in the coin flipping example) and let the probability of the respective outcomes be denoted by p1, p2, p3, ..., pM (these correspond to the 0.5s in the coin flipping example). Then, the expected value is defined mathematically as:

E(X) = p1·x1 + p2·x2 + ... + pM·xM
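A short sketch of the calculation just defined, using the coin-flip distribution from the text:

```python
# Expected value: multiply each outcome by its probability and sum the results.

def expected_value(distribution):
    """distribution: list of (outcome, probability) pairs."""
    return sum(outcome * prob for outcome, prob in distribution)

coin_flip = [(1.0, 0.5), (-1.0, 0.5)]
print("E(X) for the coin flip:", expected_value(coin_flip))   # 0.0
```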

If we examine a probability distribution graphically, we often can learn something about the expected value of the distribution. For example, Figure 3.4 illustrates two probability distributions. Since the distribution for A is shifted to the right compared with B, distribution A has a higher expected value than distribution B. When distributions are symmetric, as in Figure 3.4, identifying the expected value is relatively easy; it is the midpoint in the range of possible outcomes. When the probability distributions are not symmetric, identifying the expected value by examining a diagram sometimes can be difficult. Nevertheless, you often can compare the expected values of different distributions visually. Consider, for example, the two distributions illustrated in Figure 3.5. Distribution C has a higher expected value than distribution D. Intuitively, the high outcomes are more likely with distribution C than with D, and the low outcomes are less likely with C than with D.

Many risk management decisions depend on the probability distribution of losses that can arise from lawsuits, worker injuries, damage to property, and the like. When a probability distribution is for possible losses that can occur, the distribution is called a loss distribution. The expected value of the distribution is called the expected loss.

Concept Check

3. What is the expected value of damages for the distribution listed in Table 3.3?

Variance and Standard Deviation

The variance of a probability distribution provides information about the likelihood and magnitude by which a particular outcome from the distribution will differ from the expected value. In other words, variance measures the probable variation in outcomes around the expected value. If a distribution has low variance, then the actual outcome is likely to be close to the expected value. Conversely, if the distribution has high variance, then it is more likely that the actual (realized) outcome from the distribution will be far from the expected value. A high variance therefore implies that outcomes are difficult to predict. For this reason, variance is a commonly used measure of risk. In some instances, however, it is more convenient to work with the square root of the variance, which is known as the standard deviation.


To illustrate variance and standard deviation, consider three possible probability distributions for accident losses. Each distribution has three possible outcomes, but the outcomes and the probabilities differ. The three probability distributions are shown in Table 3.4.


For each of the loss distributions in Table 3.4, the expected value is $500 (you should verify this for yourself), but the variances of the three distributions differ. Loss distribution 2 has a larger variance than distribution 1, because the extreme outcomes for distribution 2 are farther from the expected value than they are for distribution 1. Distribution 3 has a larger variance than distribution 2, because even though the outcomes are the same for distributions 2 and 3, the extreme outcomes are more likely with distribution 3 than with distribution 2. That is, the probability of having a loss far from the expected value ($500) is greater with distribution 3 than with distribution 2. The comparison of distributions 2 and 3 illustrates that the variance depends not only on the dispersion of the possible outcomes but also on the probability of the possible outcomes. The mathematical definitions of variance and standard deviation show precisely how the probabilities of the different outcomes and the deviation of each outcome from the expected value affect these measures of risk. The definitions are:


Notice that the quantity in parentheses measures the deviation of each outcome from the expected value. This difference is squared so that positive differences do not offset negative differences. Each squared difference is then multiplied by the probability of the particular outcome so those outcomes that are more likely to occur receive greater weight in the final sum than those outcomes that have a low probability of occurrence. Additional insights about these measures of risk can be gained by going step-by-step through the calculations for distribution 1 introduced above. Table 3.5 provides this analysis. It indicates that distribution 1 has a standard deviation equal to $204. Similar calculations for distributions 2 and 3 (not shown) indicate that their standard deviations equal $408 and $447, respectively. As noted earlier, variance and standard deviation measure the likelihood that and magnitude by which an outcome from the probability distribution will deviate from the expected value. They thus measure the predictability of the outcomes. As a consequence,
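The same step-by-step calculation can be sketched in Python. The outcomes and probabilities below are hypothetical (they are not the figures from Table 3.4); the point is only to show how the expected value, variance, and standard deviation are computed:

    import math

    # Hypothetical loss distribution (not the one in Table 3.4)
    outcomes = [0.0, 500.0, 1000.0]
    probabilities = [0.25, 0.50, 0.25]

    expected = sum(x * p for x, p in zip(outcomes, probabilities))
    variance = sum(p * (x - expected) ** 2 for x, p in zip(outcomes, probabilities))
    std_dev = math.sqrt(variance)
    print(expected, variance, std_dev)   # 500.0 125000.0 353.55...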


when referring to risk as variability around the expected value, we generally will measure risk using variance or standard deviation.5 Like expected values, standard deviations of distributions often can be compared by visually inspecting the probability distributions. For example, Figure 3.6 illustrates two distributions for accident losses. Both have an expected value of $1,000, but they differ in their standard deviations. There is a greater chance that an outcome from distribution A will be close to the expected value of $1,000 than with distribution B.


Concept Checks

1. Explain why variance and standard deviation are useful measures of risk. 2. Without doing any calculations, can you compare the standard deviations of the following distributions?

Notes:

1. Compare the expected values and standard deviations of distributions A and B illustrated in the following figure:



The sample standard deviation (or, similarly, the sample variance) reflects the variation in outcomes of a particular sample from a distribution. It is calculated with the same formula that we used above for the standard deviation but with three differences. First, only the outcomes that occur in the sample are used. Second, the sample mean is used instead of the expected value, which usually is not known. Third, the squared deviations between the outcomes and the sample mean are multiplied by the proportion of times that the particular outcome actually occurs in the sample, rather than by the proportion of times that the outcome is likely to occur, according to the probability distribution.

It is useful to introduce the sample mean and sample standard deviation at this point for several reasons. First, the probability distributions for random variables that concern managers generally are not known. The sample mean and sample standard deviation sometimes can be used to estimate the unknown expected value and standard deviation of a probability distribution. Thus, estimation of the expected value and standard deviation of losses is often very important in risk management. In addition, the concept of the average loss for a group of people that pools its risk (i.e., the sample mean loss for the group) and the standard deviation of the average loss for the group (i.e., the sample standard deviation) are used later to explain how pooling can reduce risk. Finally, you will no doubt calculate sample means and sample standard deviations if you take a statistics course. We don't want you to confuse the expected value and standard deviation of the underlying probability distribution with the sample mean and sample standard deviation for a particular sample.
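A small Python sketch may help keep the two sets of quantities apart. The sample below is hypothetical; following the definition above, the squared deviations are weighted by the proportion of times each outcome occurs in the sample (that is, divided by the sample size):

    import math

    # Hypothetical sample of losses drawn from an unknown loss distribution
    sample = [0.0, 0.0, 500.0, 1000.0, 0.0]
    n = len(sample)

    sample_mean = sum(sample) / n
    sample_variance = sum((x - sample_mean) ** 2 for x in sample) / n
    sample_std_dev = math.sqrt(sample_variance)
    print(sample_mean, sample_std_dev)   # 300.0 400.0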


LESSON 8: BASIC CONCEPTS OF STATISTICS & PROBABILITY


Sample Mean and Sample Standard Deviation

Sometimes the expected value is called the mean of the distribution. We avoid using this term because it leads to confusion with another concept: the average value from a sample of outcomes from a distribution, which also is known as the sample mean. A simple illustration will help you understand the difference between the average outcome from a sample (the sample mean) and the expected value of the probability distribution. Assume that there is a 0.5 probability that the fertilization of an egg will produce a female, and there is a 0.5 probability that the fertilization will produce a male.6 The group of babies born this month in the town where you live can be viewed as a sample from this distribution. The sample mean proportion of females is the number of females in the sample divided by the total number of newborns in the sample. The sample mean proportion generally will differ from the expected value of 0.5 due to random fluctuations (unless there are lots and lots of babies in the sample). Similarly, if the expected loss from accidents for a large group of people is $500, the sample mean loss or average loss during a given time period for a sample of these people will differ from the expected value due to random fluctuations.

Concept Check

1. Recall the coin flipping game discussed earlier in the chapter where you win $1 if heads appears and lose $1 if tails appears. What is the expected value of the outcome from the game if it is played only one time? Calculate the sample mean and sample standard deviation if the game is played five times with the following results: T, T, H, T, H.

Skewness

Another statistical concept that is important in the practice of risk management is the skewness of a probability distribution. Skewness measures the symmetry of the distribution. If the distribution is symmetric, it has no skewness. For example, consider the two distributions for accident losses illustrated in Figure 3.7. The distribution at the top of Figure 3.7 is symmetric; it has zero skewness. However, the distribution at the bottom is not symmetric; it has positive skewness. Many of the loss distributions that are relevant to risk management are skewed. Note how the skewed distribution has a higher probability of very low losses and a small probability of very high losses when compared to the symmetric distribution. Recognizing this characteristic of skewed distributions is important when assessing the likelihood of large losses. If you incorrectly assume that the loss distribution is symmetric (you think that losses have distribution 1 when they really have distribution 2 in Figure 3.7), you will underestimate the likelihood of very large losses.


Concept Check

1. Draw a distribution that might describe your automobile liability losses for the coming year (i.e., the losses that you could cause to other people for which you could be sued and held liable).

Maximum Probable Loss and Value-at-Risk

A frequently used measure of risk is maximum probable loss or value-at-risk. Although used in different contexts, these terms essentially mean the same thing. Maximum probable loss usually describes a loss distribution, whereas value-at-risk describes the probability distribution for the value of a portfolio or the value of a firm subject to loss. Suppose that the probability distribution for annual liability losses is described by the probability density function in Figure 3.8. Since the random variable being described is losses, high values are bad and low values are good. If $20 million is the maximum probable loss (MPL) at the 5 percent level, the probability that losses will be greater than $20 million is 5 percent. (That is, the area under the probability density function to the right of $20 million is 0.05.) If $30 million is the MPL at the 1 percent level, the probability that losses will be greater than $30 million is 0.01.

To illustrate value-at-risk, consider the probability distribution for the change in the value of an investment portfolio over a month depicted in Figure 3.9. Since the random variable being described is portfolio value changes, high values are good and low values are bad. If $5 million is the monthly value-at-risk for this portfolio at the 5 percent level, the probability that the portfolio will lose more than $5 million over the month is 5 percent. (The area under the density function to the left of -$5 million is 0.05.) If $7.5 million is the monthly value-at-risk at the 1 percent level, the probability that the portfolio will lose more than $7.5 million over the month is 0.01.

Many large corporations estimate maximum probable losses from different exposures to evaluate risk. Most large financial institutions calculate a daily measure of value-at-risk.8 To illustrate this concept, suppose that Mr. David, the risk manager at First Babbel Corp., receives a report that the firm's daily value-at-risk at the 5 percent level is $50 million. This number tells Mr. David that the firm has a 5 percent chance of losing more than $50 million over the coming day. If Mr. David determines that the firm should not take this much risk, he might take actions to reduce the firm's value-at-risk, such as hedging or selling some risky assets. After taking these risk management actions, presumably the firm's value-at-risk would drop to an acceptable level. See Box 3.2.
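In practice, MPL or VaR at a given level is just a quantile of the loss distribution. The Python sketch below illustrates the idea with a simulated, hypothetical loss distribution (the lognormal choice and its parameters are assumptions made only for the example):

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical annual liability losses, in millions
    losses = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)

    mpl_5_percent = np.percentile(losses, 95)   # losses exceed this 5% of the time
    mpl_1_percent = np.percentile(losses, 99)   # losses exceed this 1% of the time
    print(mpl_5_percent, mpl_1_percent)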



Correlation

To this point, we have limited our discussion to probability distributions of a single random variable. Because businesses and individuals are exposed to many types of risk, it is important to identify the relationships among random variables. The correlation between random variables measures how random variables are related. If the correlation between two random variables is zero, then the random variables are not related. Intuitively, if two random variables have zero correlation, then knowing the outcome of one random variable will not give you information about the outcome of the other random variable. For example, an automaker has risk due to an uncertain number of product liability claims for autos previously sold and also due to uncertain steel prices. There is no reason to believe that these two variables will be related. Knowing that steel prices are high will not imply anything about the frequency or severity of liability claims for autos already sold. Similarly, knowing that a large liability claim for damages has occurred will not imply anything about steel prices. Thus, the correlation between steel prices and product liability costs (for past sales) is zero. When the correlation between random variables is zero, we will say that the random variables are independent or uncorrelated. These terms are used because they suggest that the outcome observed for one distribution is unrelated to the outcome observed for the other distribution.


These concepts are easily illustrated with simple examples.



In many cases random variables will be correlated. For example, a recession may decrease the demand for new cars and also decrease steel prices. Thus, the demand for new cars and steel prices both are affected by general economic conditions, and as a result, the demand for new cars and steel prices are correlated. When demand for new cars is high, steel prices also tend to be high.

Positive correlation implies that the random variables tend to move in the same direction. For example, the returns on common stocks of different companies are positively correlated: the return on one stock tends to be high when the returns on other stocks are high. Random variables can be negatively correlated as well. Negative correlation implies that the random variables tend to move in opposite directions. For example, sales of sunglasses and sales of umbrellas on any given day in a given city are likely to be negatively correlated.
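Correlation is usually estimated from data. The following Python sketch computes a sample correlation for two made-up series (the numbers are hypothetical and chosen only so that the two series tend to move together):

    import numpy as np

    new_car_demand = np.array([100, 120, 90, 130, 110, 95])
    steel_price = np.array([50, 55, 48, 60, 52, 49])

    correlation = np.corrcoef(new_car_demand, steel_price)[0, 1]
    print(correlation)   # close to +1, i.e., strongly positively correlated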


You should keep in mind that positive (negative) correlation does not imply that the random variables will always move in the same (opposite) direction. Positive correlation simply implies that when the outcome of one random variable (for example, the demand for cars) is above (below) its expected value, the other random variable (for example, steel costs) tends to be above (below) its expected value. Similarly, negative correlation implies that when one random variable (for example, sales of sunglasses) is above (below) its expected value, the other random variable (for example, umbrella sales) tends to be below (above) its expected value.

Concept Check

1. For each scenario below, explain whether the correlation between random variable 1 and random variable 2 is likely to be zero (the random variables are uncorrelated), positive, or negative.

a. Random variable 1: Your automobile accident costs for the coming year. Random variable 2: The automobile accident costs of a student in another country for the coming year.
b. Random variable 1: The property damage due to hurricanes in Miami, Florida, in September.


Random variable 2: The property damage due to hurricanes in Ft. Lauderdale, Florida, in September.
c. Random variable 1: The property damage due to hurricanes in Miami, Florida, in September 2003. Random variable 2: The property damage due to hurricanes in Miami, Florida, in September 2008.
d. Random variable 1: The number of people in New York who die from AIDS in the year 2008. Random variable 2: The number of people in London who die from AIDS in the year 2008.

Evaluating the Frequency and Severity of Losses

After identifying loss exposures, a risk manager ideally would obtain information about the entire probability distribution of losses and how different risk management methods affect this distribution. Frequently, risk managers use summary measures of probability distributions, such as frequency and severity measures, as well as expected losses and the standard deviation of losses during a given period. These measures help a risk manager assess the costs and benefits of loss control and retention versus insurance. We therefore illustrate how these summary measures can be obtained in practice.

Frequency

The frequency of loss measures the number of losses in a given period of time. If historical data exist on a large number of exposures, then the probability of a loss per exposure (or the expected frequency per exposure) can be estimated by the number of losses divided by the number of exposures. For example, if Sharon Steel Corp. had 10,000 employees in each of the past five years and over the five-year period there were 1,500 workers injured, then an estimate of the probability of a particular worker becoming injured would be 0.03 per year (1,500 injuries/50,000 employee-years). When historical data do not exist for a firm, frequency of losses can be difficult to quantify. In this case, industry data might be used, or an informed judgment would need to be made about the frequency of losses.

Severity

The severity of loss measures the magnitude of loss per occurrence. One way to estimate expected severity is to use the average severity of loss per occurrence during a historical period. If the 1,500 worker injuries for Sharon Steel cost $3 million in total (adjusted for inflation), then the expected severity of worker injuries would be estimated at $2,000 ($3,000,000/1,500). That is, on average, each worker injury imposed a $2,000 loss on the firm. Again, due to the lack of historical data and the infrequency of losses, adequate data may not be available to estimate precisely the expected severity per occurrence. With a little effort, however, risk managers can estimate the range of possible loss severity (minimum and maximum loss) for a given exposure.

Expected Loss and Standard Deviation

When the frequency of losses is uncorrelated with the severity of losses, the expected loss is simply the product of frequency and severity. Thus, the expected loss per exposure in our example can be estimated by taking expected loss severity per


occurrence times the expected frequency per exposure. Expected loss obviously is an important element that affects business value and insurance pricing. Thus, accurate estimates of expected losses can help a manager determine whether insurance will increase firm value. Continuing with the Sharon Steel example, the annual expected loss per employee from worker injury is 0.03 X $2,000 = $60. With 10,000 employees, the annual expected loss is $600,000. Ideally, many firms also will estimate the standard deviation of losses for the total loss distribution or for losses in different size ranges. One way to summarize information about potential losses is to create a table for various types of exposures (property, liability, etc.) that provides characteristics of the probability distribution of losses for the particular type of exposure. An example for Sharon Steels property exposures is provided in Table 3.6. To create an accurate categorization of a firms loss exposures (like Table 3.6), considerable information, time, and expertise are needed. For most companies, especially smaller ones and new ones, detailed data on loss exposures do not exist. Nevertheless, the framework of Table 3.6 still can be used. For example, each type of exposure can be classified as having low, medium, or high frequency and severity. Table 3.7 provides an example for Penn Steel Corp., a firm that is engaged in the same activities and is of the same size as Sharon Steel Corp.
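The Sharon Steel arithmetic can be written out directly; a minimal Python sketch of the same calculation follows (the figures are the ones used in the example above):

    injuries = 1_500
    employee_years = 10_000 * 5          # 10,000 employees over five years
    total_injury_cost = 3_000_000

    frequency = injuries / employee_years             # 0.03 injuries per employee-year
    severity = total_injury_cost / injuries           # $2,000 per injury
    expected_loss_per_employee = frequency * severity           # $60
    annual_expected_loss = expected_loss_per_employee * 10_000  # $600,000
    print(frequency, severity, expected_loss_per_employee, annual_expected_loss)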


Tables 3.6 and 3.7 both show that the standard deviation of losses for high frequency, low severity losses is low, while the standard deviation is high for low frequency losses with high potential severity. This relationship is fairly general: Infrequent but potentially large losses are less predictable and pose greater risk than more frequent, smaller losses. Using the type of information illustrated in these tables, firms pay particular attention to exposures that can produce potentially large, disruptive losses, either from a single event or from the accumulation of a number of smaller but still significant losses during a given period.


Identifying & Measuring Exposure

Risk Profile of a Financial Conglomerate: Measurement and Management

The special problem of capital management in a conglomerate stems from the need to aggregate risks across a diverse set of businesses. This section seeks to size the magnitude of aggregation effects by examining the risk profile of a banking-insurance conglomerate. In effect, it asks just how big a problem risk aggregation actually is: are aggregation effects so severe that a typical conglomerate is over-capitalized to the point of inefficiency (in particular to shareholders), or under-capitalized to the point where it causes undue risk of insolvency to debt holders and policyholders?

Capital Management of a Financial Conglomerate


A financial conglomerate is, by definition, a combination of diverse businesses operating under a common ownership structure. As shown in Figure 1, the organization can be broken down into major business lines, each of which has a distinct risk profile. For example, the universal banking activities consisting of retail, corporate, and investment banking are dominated by credit risk. Life insurance activities are dominated by market (investment) risk, and P&C activities by insurance (CAT) risk. In addition, non-licensed subsidiaries may be part of the conglomerate structure. While the common ownership structure is typically in the form of a holding company, it need not be. The analysis and results in this section are independent of the specific legal structure of the conglomerate.

The capital management problem for a conglomerate is to determine, both within and across businesses, how much capital is required to support the level of risk taking. The problem is of concern to different constituencies. On the one hand, debtholders, policyholders, regulators and rating agencies are primarily concerned with the solvency of the institution. For them, the key issue is whether the institution holds sufficient capital to absorb risk, or the potential for loss, under all but the most extreme loss scenarios. Put another way, they are seeking capital to backstop the organization's risk taking at a high degree of confidence. On the other hand, shareholders, participating policyholders, and investment analysts are primarily concerned with the profitability of the institution. For them, the key issue is whether the institution is earning a sufficient return (at or above the hurdle rate) on the capital invested to support risk taking. While capital is the common denominator that links the debtholders and shareholders, their interests pull in different directions. Lower capital for a given degree of risk taking will make an institution less solvent, but more profitable, and vice versa.

A conglomerate poses special challenges for capital management: internally, for managers charged with balancing the risk, capital, and return equation, and externally, for regulators concerned with the safety and soundness of the institution. The first challenge is to determine the standalone risk of an activity within a business line, such as the credit risk in a commercial loan portfolio or the catastrophe risk in a P&C insurer's homeowners portfolio. A number of techniques have been developed within banks and insurance companies to assess risk at this level. The next challenge is to combine the different risk factors within a business line or licensed subsidiary. Because risks are less than perfectly correlated, they cannot be strictly added together. The licensed subsidiaries include banks, insurance companies, and securities firms that are subject to specific capital requirements. The potential for diversification suggests that the whole will be less than the sum of the parts. A further challenge for a conglomerate results from the presence of unlicensed subsidiaries. They generally fall into two categories: financial subsidiaries that engage in financial businesses such as financing, insurance, and brokering outside of a licensed banking group, insurance company or securities firm; and commercial subsidiaries that are principally engaged in non-financial activities. In both cases, unlicensed subsidiaries impose an incremental need for capital, at a minimum to cover the incremental operating risks (including both business and event risk) inherent in the business. For a non-licensed financial subsidiary, such as a consumer finance or leasing company, the need for capital will be similar to that for other regulated activities. For an unlicensed commercial subsidiary, the incremental capital will be analogous to that held by a nonfinancial firm, and will be needed to cover operating risks, including business and event risk.

At the same time, the top holding company within a conglomerate structure may also be an unlicensed entity. Because of the scope of activity, the holding company raises unique issues of aggregation.

Economic Capital as a Common Currency for Risk

In order to assess capital requirements in a diverse conglomerate, there needs to be a common currency for risk that can equate risk taking in one activity or business line with another. Economic capital is often used as that common currency for risk measurement, irrespective of where the risk is incurred. Under the economic capital approach, the risks of a conglomerate or of the individual businesses within it can be classified into primary components of asset risk, liability risk, and operating risk. As shown in Figure 2, these risks can be further decomposed. Asset risk is typically broken down into credit risk and market/ALM (asset/liability mismatch or management) risk. Insurance liability risks can be separated into P&C catastrophe risk; P&C experience-based risks; and life risks. Operating risks can be split into business risk and event risks.





Although each of these risks has distinct statistical properties, as reflected in the stylized risk distributions shown for them, economic capital sets a common standard for measuring the degree of risk taking. The standard is defined in terms of a confidence interval in the cumulative loss distribution, assessed over a common time horizon. By setting capital for different risks at the same confidence interval, the economic capital requirements for different risk factors and types of activities can be directly compared. Only by having such a common standard can one meaningfully assess the risk for a complex conglomerate with a diversity of business activities across the financial spectrum. A typical approach is to tie the degree of capital protection to the target debt rating of the institution. One convention is to adopt a one-year time horizon. In the figure above, the economic capital requirement is set to protect against losses over one year at the 99.9% level, roughly equivalent to the default risk of an A rated corporate bond. Put another way, a business that holds capital to protect against one-year losses at the 99.9% confidence interval would have a default risk consistent with an A debt rating. Setting capital for the individual risks within a conglomerate at this confidence interval assures that each of them, on a standalone basis, is protected to a level consistent with the target rating. Economic capital models are typically built up from the specific analytical approaches for individual risk factors. Table 3 lists common modeling approaches for the main risk types. While the specific tools for risk factors such as credit, CAT, P&C experience and market risks differ, the common denominator in an economic capital framework is the attempt to describe the risk distribution in probabilistic terms, and to set capital at a common confidence interval.
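To make the idea of a common confidence standard concrete, the Python sketch below sets economic capital at the 99.9% point of a simulated one-year loss distribution. Both the simulated distribution and the convention of netting expected loss are assumptions made only for illustration; actual economic capital models differ by institution:

    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical one-year losses for a single risk factor, in millions
    annual_losses = rng.gamma(shape=2.0, scale=50.0, size=1_000_000)

    loss_at_99_9 = np.quantile(annual_losses, 0.999)
    expected_loss = annual_losses.mean()
    economic_capital = loss_at_99_9 - expected_loss   # capital held against unexpected loss
    print(loss_at_99_9, expected_loss, economic_capital)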

Risk Aggregation: The Building Block Approach


Just as risks can be decomposed into individual factors, they can also be re-aggregated.

The distribution drawn at the bottom of Figure 2 shows conceptually how the various risk distributions of a financial conglomerate can be combined to yield a single, cumulative loss distribution and economic capital requirement for the institution. In practice, aggregating the various risk distributions in a complex financial organization is a challenging and somewhat arbitrary task. Ultimately, the total amount of economic risk at the top of an organization is independent of how risks are aggregated within it.27 One proposed method for constructing a composite risk picture is to follow a building block approach that aggregates risk at successive levels in an organization. There are three key levels, corresponding to the levels at which risks are typically managed:

Level I: The first level aggregates the standalone risks within a single risk factor in an individual business line. Examples include aggregating the credit risk in a commercial loan portfolio; the equity risks in a life insurance investment portfolio; and the catastrophe risks in a P&C underwriting business.

Level II: The second level aggregates risk across different risk factors within a single business line. Examples include aggregating the credit, market/ALM, and operating risks in a bank; or combining the asset, liability, and operating risks in P&C or life insurance.




Level III: The third level aggregates risk across different business lines, such as banking and insurance subsidiaries. This leads to the composite picture or cumulative loss distribution at the top (holding company) level. Our analysis adopts the building block approach because it allows the risk profiles of each of the businesses within a conglomerate to be considered separately, before introducing the cross-business aggregation effects that are unique to a conglomerate. Put another way, Level I and Level II aggregation effects are already present within regulated banks and insurance companies. The unique problems of a conglomerate are associated with Level III. Differences in corporate structure, business line definitions, legal requirements, or risk management philosophy may lead some organizations to follow other hierarchies in aggregating risks than the three-level building block approach. Although this should ultimately produce the same amount of overall risk at the top of the organization, alternative approaches to risk aggregation will yield different results at lower levels.


Estimating Aggregation Effects: General Principles

The starting point for risk aggregation is to begin with standalone estimates of the economic capital required for individual risk factors. While the general concept of economic capital is explained above, the specific techniques for calculating risk at this level are beyond the scope of this paper.

Given estimates of standalone economic capital, one can aggregate risks to a first-order approximation based on the relative amounts of economic capital, and then consider correlations between risk factors. As discussed in Appendix A, the diversification benefits that accrue from aggregation are driven by three main factors:

A1. The number of risk positions (N)
A2. The concentration of those risk positions, or their relative weights in a portfolio
A3. The correlation between the positions

In general, the diversification benefit increases with the number of positions, decreases with greater concentration, and decreases with greater correlation. These properties are depicted in Figure 3, which shows the diversification benefits for a portfolio of equally weighted assets or liabilities for two different levels of correlation.
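These relationships can be illustrated with a standard textbook approximation for equally weighted, equal-volatility positions that share a common pairwise correlation. The Python sketch below computes the ratio of the diversified standard deviation to the sum of the standalone standard deviations; the exact figures in Figure 3 may be computed differently, so treat the numbers as indicative only:

    import math

    def diversification_ratio(n, rho):
        # Std dev of the total, relative to the sum of standalone std devs,
        # for n equally weighted positions with common pairwise correlation rho
        return math.sqrt(1.0 / n + (1.0 - 1.0 / n) * rho)

    for rho in (0.40, 0.02):
        for n in (10, 30, 100):
            ratio = diversification_ratio(n, rho)
            print(f"rho={rho:.2f}  n={n:3d}  ratio={ratio:.3f}  benefit={1 - ratio:.0%}")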


As the number of positions28 increases for a given level of correlation, the diversification ratio decreases; that is, there is an increase in the diversification benefit. For a given number of positions, as the correlation decreases, the diversification benefit increases. For a correlation level typically experienced in market risk (around 40%), the full diversification benefit amounts to about one-third; this is achieved fairly rapidly, after about 30 positions. For the much lower correlations typical of a credit portfolio (around 2%), the diversification benefit is significantly greater, around 75%. However, it takes longer to get there; this limit is achieved after around 100 positions. Though the figure deals with equally weighted positions, as concentration increases the diversification benefit decreases. This is most easily seen at the extreme. The diversification benefit obtained by one risk factor that is nine times the size of another will be very small, even if the two risks are completely uncorrelated (independent). At zero correlation, the largest diversification benefit possible cannot exceed 10%. There is a related point that can be illustrated in Figure 3. The Capital Asset Pricing Model (CAPM) breaks total risk into specific (or idiosyncratic) and systematic risk. While the two curves represented in the graph show the systematic risk associated with average correlations of 40% and 2%, respectively, it is possible to conceive of different systematic risk curves for a given risk factor, reflecting different average levels of correlation. For example, a systematic risk curve for a domestic equities portfolio will have a higher level of average correlation and a lower diversification benefit than a systematic risk curve for a globally diversified equities portfolio. Similarly, the systematic risk curve for a regionally concentrated credit portfolio will have a higher level of average correlation than a systematic risk curve for a globally diversified credit portfolio. Thus, the diversification benefit obtainable within a given risk factor will vary with the scope of activity. With these principles in mind, the general characteristics of the three levels of aggregation can be stated as follows:

Level I has many risk factors (individual assets or liabilities) that are neither strongly concentrated nor highly correlated. As a result, there should be a high diversification benefit at Level I. The achievable diversification benefit will be driven by the scope of activity.
Level II has fewer risk factors, and they are likely to be more concentrated and more correlated. Consequently, there should be a lower diversification benefit at Level II than at Level I.

At Level III, there are only a few risk factors; some are likely to be much more highly concentrated than others, and correlations will tend to be high. Level III should therefore yield the smallest diversification effects of the three levels. These relationships are examined empirically in the following sections.


LESSON 9: MEASURING RISK AND VAR


Risk Measure and Risk Metric
Before discussing risk measures and risk metrics, it is useful to distinguish between the more basic notions of measure and metric. A measure is an operation for assigning a number to something. A metric is our interpretation of the assigned number. When we apply a measure, the number obtained is a measurement. Consider an example. We have someone take off his shoes and stand with his back against a wall that is marked with a scale of inches. We note the number of inches corresponding to the top of the person's head. This process is a measure. The number obtained is a measurement. We interpret the number as the person's height. The interpretation is a metric. Measures are employed to quantify many things: height, temperature, aptitude, speed, consumer confidence, etc. All of these notions being quantified are metrics. The operations with which we quantify them are measures. There are many metrics of risk: volatility, delta, gamma, duration, convexity, beta, etc. We call these risk metrics. A measure that supports a risk metric is called a risk measure. The value obtained from applying a risk measure is a risk measurement. Risk measures tend to be categorized according to the risk metrics they support. There are measures of duration, measures of delta, etc. This is an important point. We do not categorize risk measures according to the specific operations they entail. Operationally, there are many different ways we might arrive at a measurement of a portfolio's volatility. Irrespective of the actual operations, all of them are measures of volatility. All support a volatility risk metric.

Value at Risk

Value-at-risk (VaR) is a category of risk measures that describe probabilistically the market risk of a trading portfolio. VaR is widely used by banks, securities firms, commodity and energy merchants, and other trading organizations. Such firms could track their portfolios' market risk by using historical volatility as a risk metric. They might do so by calculating the historical volatility of their portfolios' market value over a rolling 100 trading days. The problem with doing this is that it would provide a retrospective indication of risk. The historical volatility would illustrate how risky the portfolio had been over the previous 100 days. It would say nothing about how much market risk the portfolio was taking today. For institutions to manage risk, they must know about risks while they are being taken. If a trader mis-hedges a portfolio, his employer needs to find out before a loss is incurred. VaR gives institutions the ability to do so. Unlike retrospective risk metrics, such as historical volatility, VaR is prospective. It quantifies market risk while it is being taken.

Measure time in trading days. Let 0 be the current time. We know a portfolio's current market value 0p. Its market value 1P in one trading day is unknown. It is a random variable. We may ascribe it a probability distribution. With VaR, we summarize a portfolio's market risk by reporting some parameter of this distribution. For example, we might report the 90%-quantile of the portfolio's single-period USD loss. This is called one-day 90% USD VaR. If a portfolio has a one-day 90% USD VaR of, say, USD 5MM, it can be expected to lose more than USD 5MM on one trading day out of ten. This is illustrated in Exhibit 1.

Example: One-Day 90% USD VaR (Exhibit 1)

One-day 90% USD VaR is illustrated for a hypothetical portfolio. Shown is the probability density function for the portfolio's value 1P one trading day from now. The portfolio's current value 0p is known. VaR equals the amount of money such that there is a 90% probability of the portfolio losing less than that amount over the next trading day. This is indicated in the Exhibit. VaR can be measured in other ways. For example, bank regulations require that VaR be calculated as a 99%-quantile of loss over a two-week horizon. Still other metrics are possible. We could measure VaR as the standard deviation of portfolio value or the standard deviation of portfolio return. Essentially, any parameter of the distribution of a portfolio's future value can be used to measure VaR. Let's formalize this. VaR is applicable to any liquid portfolio, that is, any portfolio that can reasonably be marked to market on a regular basis. Value-at-risk is not applicable to illiquid assets, such as real estate or fine art. VaR considers a portfolio's performance over a specific horizon: a trading day, two weeks, a month, etc. We call this the VaR horizon. VaR is measured in a particular currency, USD in the example above, but any currency can be used. This is called the base currency. Finally, the portfolio's market risk is summarized with a single number. Informally, we called this a parameter of the distribution of portfolio value. More formally, it is any function of both the portfolio's current value and its (random) value at the end of the VaR horizon. In our example, the function was the 90%-quantile of loss. As we mentioned, other functions are possible. A VaR measurement is the value obtained for that function for a specific portfolio at a specific point in time. We distinguish between a VaR measure and a VaR metric. A VaR measure is the procedure by which we arrive at a VaR measurement. It is some computational algorithm, which is typically coded on a computer. A VaR metric is our interpretation of the VaR


measurement. In our examples, the VaR metric was one-day 90% USD VaR. Other examples of VaR metrics are:
Two-week 99% EUR VaR
One-year standard deviation of USD return
One-day semi-variance of JPY portfolio value

Value-at-risk became popular with trading organizations during the 1990s. It was during this period that the name value-at-risk entered the financial lexicon. However, VaR measures had been used long before this. An early user was Harry Markowitz. In his groundbreaking (1952) paper Portfolio Selection, he adopted a VaR metric of single-period variance of return and used this to develop techniques of portfolio optimization. In the early 1980s, the United States Securities and Exchange Commission (SEC) adopted a crude VaR measure for use in assessing the capital adequacy of broker-dealers trading non-exempt securities. A few years later, Bankers Trust implemented a VaR measure for use with its RAROC capital allocation system. During the late 1980s and early 1990s, a number of institutions implemented VaR measures to support capital allocation or market risk limits.

In the early 1990s, three events popularized value-at-risk as a practical tool for use on trading floors: In 1993, the Group of 30 published a groundbreaking report on derivatives practices. It was influential and helped shape the emerging field of financial risk management. It promoted the use of value-at-risk by derivatives dealers and appears to be the first publication to use the phrase value-at-risk. In 1994, JP Morgan launched its free RiskMetrics service. This was intended to promote the use of value-at-risk among the firm's institutional clients. The service comprised a technical document describing how to implement a VaR measure and a covariance matrix for several hundred key factors updated daily on the Internet. In 1995, the Basle Committee on Banking Supervision implemented market risk capital requirements for banks. These were based upon a crude VaR measure, but the committee also approved, as an alternative, the use of banks' own proprietary VaR measures in certain circumstances.

These three initiatives came during a period of heightened concern about systemic risks due to the emerging, and largely unregulated, OTC derivatives market. It was also a period when a number of organizations, including Orange County, Barings Bank, and Metallgesellschaft, suffered staggering losses due to speculative trading, failed hedging programs or derivatives. Financial risk management was a priority for firms, and value-at-risk was rapidly embraced as the tool of choice for quantifying market risk. Financial firms, corporate treasuries, commodities merchants, and energy merchants implemented it.

Measuring Value at Risk

Value-at-Risk (VaR) is a powerful tool for assessing market risk, but it also poses a challenge. Its power is its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, VaR is general. It is based on the probability distribution for a portfolio's market value. All liquid assets have uncertain market values, which can be characterized with probability distributions. All sources of market risk contribute to those probability distributions. Being applicable to all liquid assets and encompassing, at least in theory, all sources of market risk, VaR is an all-encompassing measure of market risk. As with its power, the challenge of VaR also stems from its generality. In order to measure market risk in a portfolio using VaR, some means must be found for determining the probability distribution of that portfolio's market value. Obviously, the more complex a portfolio is (the more asset categories and sources of market risk it is exposed to), the more challenging that task becomes.

It is worth distinguishing between three concepts: A VaR measure is an algorithm with which we calculate a portfolio's VaR. A VaR model is the financial theory, mathematics, and logic that motivate a VaR measure. It is the intellectual justification for the computations that are the VaR measure. A VaR metric is our interpretation for the output of the VaR measure. Examples of VaR metrics are one-day 95% USD VaR or one-week standard deviation of return EUR VaR. A VaR measure is just a bunch of computations. What justifies our interpreting the output of those computations as, say, two-week 99% EUR VaR? The answer is the VaR model. The VaR model is the intellectual link between the computations of a VaR measure and the interpretation of the output of those computations, which is the VaR metric. This article focuses on VaR measures and VaR models. Conveniently, these can be discussed without regard for specific VaR metrics. The reason is that valuation of a VaR metric is the final step of any VaR measure. The real work for a VaR measure is to somehow characterize a probability distribution for a portfolio's market value. Valuing a specific VaR metric based on that characterization is a final step; it is almost an afterthought. By changing that final step of a VaR measure, we can alter the VaR measure to support a different VaR metric. Accordingly, to a large extent, any VaR measure can support any VaR metric, and we can discuss VaR measures without considering the specific VaR metrics they are to support.

Measure time in trading days. Let 0 be the current time. We know a portfolio's current market value 0p. Its market value 1P in one trading day is unknown. It is a random variable. Our notation uses preceding superscripts to denote time. We find it convenient to indicate random quantities with capital letters and known constants with lower case letters. Our task is to ascribe a probability distribution to 1P. One way that we might simplify this task is to assume some standard distribution. Doing so reduces the problem from one of estimating an entire distribution to that of estimating the handful of parameters necessary to specify that standard distribution. Depending upon the standard distribution which is assumed, this simple approach may yield a closed formula for the portfolio's VaR. For example, a normal distribution is fully described with two parameters, its mean and standard deviation. If we assume 1P is normally distributed, then all we need do in order to measure VaR is estimate the mean 1u and standard deviation 1s of that distribution. (The preceding superscripts in our notation indicate that the parameters are for the portfolio's time 1 value, conditional on information available at time 0.) Together with the normality assumption, these two parameters provide all the information necessary to value any other parameter (VaR metric) related to the distribution of 1P. For example, if our VaR metric is one-day 95% USD VaR, we can calculate VaR as

VaR = 1.645(1s) + (0p - 1u)   [1]

This formula is based on the fact that the 5%-quantile of a normal distribution always occurs 1.645 standard deviations below its mean. See Exhibit 1 to understand [1], or see the article linear value-at-risk for a more detailed discussion.
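The 1.645 in formula [1] is just the standard normal 95% quantile; quantiles for other confidence levels can be looked up the same way. The short Python sketch below uses a hypothetical one-day standard deviation and assumes the expected value equals the current value, so the adjustment term in [1] drops out:

    from scipy.stats import norm

    sigma = 2_000_000.0   # hypothetical one-day standard deviation of 1P, in USD
    for confidence in (0.90, 0.95, 0.975, 0.99):
        z = norm.ppf(confidence)          # 1.282, 1.645, 1.960, 2.326
        print(confidence, round(z, 3), z * sigma)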


Example: One-Day 95% USD VaR (Exhibit 1)

If we assume a portfolio's value 1P is normally distributed, then we can calculate its 95% VaR with formula [1]. The 5%-quantile of a normal distribution occurs 1.645 standard deviations below its mean. Of course, losses are measured relative to a portfolio's current value 0p and not its expected value 1u. Accordingly, we must adjust 1.645(1s) by the difference (0p - 1u), as indicated in [1], to obtain the portfolio's VaR.

In practice, a portfolio's expected value 1u will often be close to its current value 0p. This is especially true over short VaR horizons, such as the one trading day horizon of our example. In this circumstance, it may be reasonable to set 1u = 0p. With this simplification, our formula [1] for 95% VaR becomes

VaR = 1.645(1s)   [2]

Based upon similar assumptions, formulas for 90%, 97.5% and 99% VaR are approximately 1.282(1s), 1.960(1s), and 2.326(1s), respectively.

Estimating the standard deviation 1s of the portfolio's market value is analogous to the task of estimating the standard deviation of portfolio return, a task you may be familiar with from portfolio theory. Except for the fact that VaR deals with market values instead of returns, we may adopt this familiar mathematics of portfolio theory for estimating VaR. We use a general result from probability. Suppose X1, X2, . . . , Xm are random variables having standard deviations s1, s2, . . . , sm and correlations pij. Suppose another random variable Y is defined as a linear polynomial of the Xi:

Y = b1X1 + b2X2 + . . . + bmXm + a   [3]

Then the standard deviation of Y is given by

s(Y) = [ SUMi SUMj bi bj si sj pij ]^(1/2)   [4]

where the double sum runs over i, j = 1, . . . , m. Formula [4] is completely general. So long as Y is a linear polynomial of the Xi, we can use [4]. We need no other assumptions or information about the random variables Xi. We can apply [4] to estimate the standard deviation of our portfolio's market value 1P. Suppose the portfolio has holdings in m assets. The assets' accumulated market values at time 1 are random variables, which we denote 1S1, 1S2, . . . , 1Sm. Then

1P = 1S1 + 1S2 + . . . + 1Sm   [5]




Based on [5], we can apply [4] to obtain the standard deviation 1s of 1P. All we need as inputs are standard deviations and correlations for the 1Si. These might be inferred by applying methods of time series analysis to historical price data for the assets. In some cases, this is feasible. In others, it is not. Collecting historical price data for every asset held by a portfolio may be a daunting task. A more manageable approach may be to model the portfolio's behavior, not in terms of individual assets, but in terms of specific risk factors. Depending upon the composition of the portfolio, risk factors might include exchange rates, interest rates, commodity prices, spreads, implied volatilities, etc. We call the n modeled risk factors key factors. We denote their values at time 1 as 1R1, 1R2, . . . , 1Rn. The key factors comprise an ordered set (or vector), which we call the key vector. We denote it 1R = (1R1, 1R2, . . . , 1Rn).
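As a concrete, and entirely hypothetical, illustration of applying [4], the Python sketch below computes the standard deviation of a sum of three asset values from assumed standard deviations and correlations:

    import numpy as np

    b = np.array([1.0, 1.0, 1.0])            # coefficients b1, b2, b3 (a simple sum, as in [5])
    sigma = np.array([200.0, 300.0, 150.0])  # assumed std devs of the 1Si
    corr = np.array([[1.0, 0.3, 0.2],
                     [0.3, 1.0, 0.4],
                     [0.2, 0.4, 1.0]])       # assumed correlations

    cov = np.outer(sigma, sigma) * corr      # covariance matrix
    std_of_sum = float(np.sqrt(b @ cov @ b)) # formula [4]
    print(std_of_sum)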

In all likelihood, the number n of key factors we need to model will be substantially less than the number m of assets held by the portfolio. Selecting which key factors to model is as simple (or complex!) as choosing a set of market variables such that a pricing formula for each asset held by the portfolio can be expressed in terms of those variables. That is, for each asset, there must exist a valuation function fi such that 1Si = fi(1R1, 1R2, . . . , 1Rn) [6]. Because the value of the portfolio is a linear polynomial of the asset values 1Si, we can now express 1P in terms of the key factors:

1P = f1(1R) + f2(1R) + . . . + fm(1R)   [7]


This is a functional relationship that specifies the portfolio's market value in terms of the key factors 1R1, . . . , 1Rn. Shorthand notation for the relationship is

1P = O(1R)   [8]

Relationship [8] is called a portfolio mapping. The function O is called the portfolio mapping function. As an example, suppose a portfolio comprises 100 shares of Dell stock, 200 shares of IBM stock and a short position of 300 shares of Microsoft stock. In this case, we would define the key factors 1R1, 1R2, and 1R3 to be the time 1 market values of one share of Dell, IBM, and Microsoft stock, respectively [9].


Assuming none of the stocks goes ex-dividend during the VaR horizon, the portfolio mapping is

1P = 100(1R1) + 200(1R2) - 300(1R3)   [10]
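Written as code, this mapping is just a linear function of the three key factors. The following Python sketch is a direct transcription of [10]; the share prices passed in are hypothetical:

    def portfolio_mapping(r1, r2, r3):
        # 1P = 100*1R1 + 200*1R2 - 300*1R3  (100 Dell, 200 IBM, short 300 Microsoft)
        return 100.0 * r1 + 200.0 * r2 - 300.0 * r3

    # Hypothetical share prices one trading day from now
    print(portfolio_mapping(25.0, 90.0, 30.0))   # 11500.0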

This is a very simple portfolio mapping. A slightly more complex example is a portfolio comprising a call option on a futures contract. In this case, the key factors [11] would include the price of the underlying futures contract (and possibly its implied volatility), and the portfolio mapping function is simply Black's (1976) pricing formula for options on futures. Obviously, if a portfolio holds many complicated instruments, the portfolio mapping function will be equally complicated.


The portfolio mapping function O maps the n-dimensional space of the key factors to the one-dimensional space of the portfolio's market value. Given a realization for 1R, O gives us the corresponding value of 1P. That doesn't solve our problem. We're not interested in one possible realization of 1P. We need to characterize the entire distribution of 1P. Somehow, we need to apply the portfolio mapping function O to the entire joint distribution of 1R to obtain the entire distribution of 1P. The question is: how? After all, beyond purporting its existence, we know very little about the portfolio mapping function O. It could be some complex function with discontinuities and other inconvenient properties.

A simple solution exists if O is a linear polynomial, as is the case in the above example of a portfolio with holdings in Dell, IBM, and Microsoft stock. As indicated by [10], O is a linear polynomial for that example. If we assume that 1P is normally distributed and that 1u = 0p, then all we need to calculate is 1s. Given standard deviations and correlations for the 1Ri, we can apply [4] to obtain 1s. But what if O is not a linear polynomial? In our example of an option portfolio, O is given by Black's (1976) option pricing formula. That is decidedly non-linear, so we cannot use [4] to obtain 1s. Furthermore, we cannot reasonably assume that 1P is normally distributed. Because options limit downside risk, they skew the probability distribution of 1P. Normal distributions aren't skewed.

These issues can be understood graphically. Consider Exhibit 2. It illustrates with two graphs the situation if a portfolio mapping function is a linear polynomial. The graph on the left is of O. It shows how the price of the portfolio responds linearly to changes in a single key factor 1R1. In that graph, evenly spaced values for 1R1 have been mapped into corresponding values for 1P. The resulting values of 1P are also evenly spaced, indicating that the mapping causes no distortions. If 1R1 is normally distributed, so will be 1P. That normal distribution for 1P is depicted in the graph on the right.

Example: Linear Portfolio (Exhibit 2)


A linear mapping function is applied to a key factor 1R1. This is illustrated intuitively by mapping evenly spaced values for 1R1 through the mapping function. The output values for 1P are also evenly spaced, indicating that the mapping function causes no distortion. Since 1R1 is conditionally normal, so is 1P. If the portfolio price function is non-linear, 1P may not be normally distributed. This is illustrated in Exhibit 3 with a portfolio consisting of a single call option on an underlier 1R1.

Example: Long Call Option (Exhibit 3)

A non-linear mapping function is applied to a conditionally normal key factor 1R1. The result is a conditionally non-normal portfolio value 1P. This is illustrated intuitively by mapping evenly spaced values for 1R1 through the mapping function. The corresponding values for 1P are not evenly spaced, reflecting how the mapping function distorts the distribution of 1P. The left graph of Exhibit 3 depicts the familiar hockey stick price function for a call option. Evenly spaced values for 1R1 do not map into evenly spaced values for 1P. If 1R1 is normally distributed, the resulting distribution of 1P will not be normal. As shown on the right, it will be skewed. That skewness reflects the call option's limited downside risk. Portfolios can have more complex price distributions. For example, a range forward is a long-short options position which, when applied to a short position in an underlier, behaves as illustrated in Exhibit 4.


Example: Range Forward Hedging a Short Position Exhibit 4


A long-short options position can result in a bimodal distribution for 1P. In the left graph of Exhibit 4, we see that values of 1P cluster in two regions, resulting in the dramatically non-normal price distribution shown on the right.
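The distortion described above is easy to reproduce numerically. The following sketch (illustrative numbers, not from the text) pushes a normally distributed key factor through a call option's exercise-value mapping and measures the resulting skewness of 1P.

import numpy as np

rng = np.random.default_rng(0)

# Key factor: underlier value at the end of the horizon, assumed normal (numbers illustrative).
S1 = rng.normal(loc=100.0, scale=5.0, size=100_000)

# Non-linear mapping: exercise value of a long call struck at 100.
P1 = np.maximum(S1 - 100.0, 0.0)

def skewness(x):
    """Simple sample skewness: third standardized moment."""
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

print(round(skewness(S1), 3))   # roughly 0: the key factor is symmetric
print(round(skewness(P1), 3))   # clearly positive: the mapping skews 1P, as in Exhibit 3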

These three examples illustrate how linearity of the portfolio mapping function can simplify the task of calculating a portfolio's value-at-risk. Non-linear portfolios often exhibit unusual price distributions. These can differ markedly and in unpredictable ways from normal distributions. Such portfolios require more sophisticated modeling techniques. Here is the general problem we face in calculating value-at-risk. To calculate VaR, we need to characterize the distribution of 1P conditional on information available at time 0. Our puzzle has two pieces.

The first piece of the puzzle is the key vector 1R. Because its components are observable financial variables, historical data should be available for them. Based on this, we can characterize the joint distribution of 1R. We may do so with standard deviations and correlations for the key factors, or we may do so in some other manner. Our problem, then, is to convert that characterization of the distribution of 1R into a characterization of the distribution of 1P. On its own, our characterization of the distribution of 1R is not enough to do this. Because it is independent of the portfolio's composition, it cannot, on its own, tell us how risky the portfolio is. The second piece of the puzzle is the portfolio mapping [8] that relates 1P to 1R. That formula will change over time, evolving to reflect the portfolio's changing composition. Formula [8] contributes to our analysis what the characterization of the distribution of 1R does not. It reflects the portfolio's composition. On its own, however, it cannot tell us how risky the portfolio is, for it contains no information relating to market volatility.


We need to combine these two pieces of the puzzle in order to estimate the distribution of 1P. Somehow we must filter the market information contained in the characterization of the distribution of 1R through the portfolio information contained in the portfolio mapping [8].

Every VaR measure must address this problem. Accordingly, all VaR measures share certain common components related to solving this problem. All must specify a portfolio mapping. All must somehow characterize the joint distribution of 1R. All must somehow combine these two pieces to characterize the distribution of 1P. Exhibit 5 is a schematic summarizing these three processes that are common to all practical VaR measures. Schematic of How VaR Measures Work Exhibit 5



All practical VaR measures accept portfolio data and historical market data as inputs. They process these with a mapping procedure, an inference procedure, and a transformation procedure. The output comprises the value of a VaR metric; that value is the VaR measurement. Any practical VaR measure must therefore include three procedures:

a mapping procedure,
an inference procedure, and
a transformation procedure.

Recall that risk has two components: Exposure, and


Uncertainty.

By specifying a portfolio mapping, a mapping procedure describes exposure. By characterizing the joint distribution of 1R, an inference procedure describes uncertainty. A transformation procedure combines exposure and uncertainty to describe the distribution of 1P, which it then summarizes with the value of some VaR metric. In so doing, the transformation procedure describes risk. A mapping procedure accepts a portfolio's composition as an input. Its output is a portfolio mapping function that defines 1P as a function of 1R. Specifying that function is largely an exercise in financial engineering. Since it must value an entire portfolio, it can be complicated. For example, if a portfolio holds 1,000 exotic derivatives, the mapping function will be extremely complicated and may take hours to value, even on a computer. For this reason, a mapping procedure may employ certain approximations, called remappings, to simplify it. The purpose of an inference procedure is to characterize the joint probability distribution of the key vector 1R conditional on information available at time 0. It generally accepts historical market

data as an input and applies techniques of time series analysis to characterize the joint distribution conditional on information available at time 0. Techniques currently employed tend to be crude. The most common are those of uniformly weighted moving averages (UWMA) and exponentially weighted moving averages (EWMA). What is needed are time-series methods that can address conditional heteroskedasticity in high dimensions. While research is ongoing, such methods are not yet perfected. A transformation procedure combines the outputs from the mapping and inference procedures and uses them to characterize the distribution of 1P, conditional on information available at time 0. Based on that characterization, and perhaps the portfolio's current value 0p, the transformation procedure (or transformation) determines the value of the desired VaR metric. The result is the VaR measurement. Much research has focused on transformation procedures. Four basic forms of transformations are used:
Linear transformations,
Quadratic transformations,
Monte Carlo transformations, and
Historical transformations.

Linear transformations are simple and run in real time. Based on [4], they apply only if a portfolio mapping function is a linear polynomial. Quadratic transformations are slightly more complicated, but also run in real time (or near-real time). They apply only if a portfolio mapping function is a quadratic polynomial and 1R is joint-normal. Monte Carlo and historical transformations are widely applicable, but tend to run slowly (run times of an hour or more are not uncommon). Both employ the Monte Carlo method. They both generate a large number of realizations 1r[k] for 1R and value 1P for each. The histogram of realizations 1p[k] for 1P provides a discrete approximation for the conditional distribution of 1P. From this, any VaR metric can be valued. Monte Carlo and historical transformations differ only in how they generate the realizations 1r[k]. Monte Carlo transformations generate them with pseudorandom number generators. Historical transformations draw them from historical market data. Traditionally, VaR measures have been categorized according to the transformation procedures they employ. There are:

Linear VaR measures (other names include: parametric, variance-covariance, closed form, or delta normal VaR measures),
Quadratic VaR measures (also called delta-gamma VaR measures),
Monte Carlo VaR measures, and
Historical VaR measures.

This naming convention may have had unfortunate consequences. By focusing attention on the role of transformation procedures, the convention tends to downplay the important roles of mapping and inference procedures. Over the past 10 years, most VaR-related research has focused on transformations. Important research on mapping and inference procedures has lagged. To apply a VaR measure, it must be implemented in some manner. For a very simple portfolio, perhaps one comprising a single asset, a VaR measure might be implemented with pencil and paper. In actual trading environments, VaR measures are coded as software and run on computers. An implemented VaR measure is a VaR implementation.

VaR Metric

It assumes familiarity with the concepts of value-at-risk and of measuring VaR discussed earlier. It is worth distinguishing between three concepts:

A VaR measure is an algorithm with which we calculate a portfolio's VaR.
A VaR metric is our interpretation for the output of the VaR measure.
A VaR model is the financial theory, mathematics, and logic that motivate a VaR measure. It is the intellectual justification for the computations that are the VaR measure.

Examples of VaR metrics are one-day 95% USD VaR or one-week standard deviation of return EUR VaR. A VaR measure is just a bunch of computations. What justifies our interpreting the output of those computations as, say, two-week 99% EUR VaR? The answer is the VaR model. The VaR model is the intellectual link between the computations of a VaR measure and the interpretation of the output of those computations, which is the VaR metric. Let's introduce some notation. We measure time in units equal to the length of the VaR horizon. The present time is time 0. The end of the VaR horizon is time 1. To distinguish between known quantities and random quantities, we denote the former with lowercase letters and the latter with capital letters. With this convention, we denote the portfolio's current market value as 0p and its market value at the end of the VaR horizon as 1P. The preceding superscripts 0 and 1 denote time. Formally, a VaR metric is a real-valued function of:

The distribution of 1P, conditional on information available at time 0; and
The portfolio's current value 0p.

Standard deviation of portfolio simple return 1Z, conditional on information available at time 0, is a VaR metric

[1]

Quantiles of portfolio loss, 1L = 0p - 1P, make intuitively appealing VaR metrics. If a portfolio's conditional .95-quantile of 1L is USD 12.5MM, then such a portfolio can be expected to lose less than USD 12.5MM on 19 days out of 20. An example of a risk metric that is not a VaR metric is standard deviation of cash flow. Because this generally cannot be expressed as a function of 0p and the conditional distribution of 1P, it is not a VaR metric. VaR metrics can be quite elaborate. Semi-variance of portfolio return 1Z is one example. Define

[2]


Then the semi-variance of 1Z is simply the variance of the quantity defined in [2]. Another VaR metric is expected tail loss (ETL), which is sometimes called expected shortfall. This is the average portfolio loss, assuming that the loss exceeds some quantile of loss. For example, a 90% ETL VaR metric indicates the expected loss conditional on that loss exceeding its own .90-quantile. To fully specify a VaR metric, we must indicate three things:

The period of time (1 day, 2 weeks, 1 month, etc.) between time 0 and time 1; this is the VaR horizon;
The function of 0p and the conditional distribution of 1P; and
The currency in which 0p and 1P are denominated; this is the base currency.

We adopt a convention for naming VaR metrics:

The metric's name is given as the horizon, function and currency, in that order, followed by "VaR".
If the horizon is expressed in days without qualification, these are understood to be trading days.
If the function is a quantile of loss, it is indicated simply as a percentage.

For example, we may speak of a portfolio's

1-day standard deviation of simple return USD VaR,
2-week 95% JPY VaR, or
1-week 90% ETL GBP VaR, etc.
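As an illustration of how a transformation procedure turns realizations of 1P into such metrics, here is a minimal sketch of a historical-style transformation for a linear portfolio. The positions and the simulated "historical" returns are assumptions for illustration; a real historical transformation would draw the realizations from actual market data.

import numpy as np

rng = np.random.default_rng(7)

# Illustrative current portfolio value 0p and dollar positions on three key factors.
p0 = 1_000_000.0
positions = np.array([400_000.0, 350_000.0, 250_000.0])

# Realizations of the key factor returns (simulated here in place of historical data).
hist_returns = rng.multivariate_normal(mean=np.zeros(3),
                                       cov=np.diag([0.02, 0.015, 0.025]) ** 2,
                                       size=750)

p1 = p0 + hist_returns @ positions        # realizations 1p[k] of the portfolio value
loss = p0 - p1                            # 1L = 0p - 1P

var_95 = np.quantile(loss, 0.95)          # one-day 95% quantile-of-loss VaR
etl_90 = loss[loss > np.quantile(loss, 0.90)].mean()   # one-day 90% ETL (expected shortfall)
print(round(var_95, 0), round(etl_90, 0))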

Notes:


LESSON 10: BEYOND VAR & CASH AT RISK

Here we are going to discuss the extensions to standard VaR and the techniques that have been proposed as alternatives to VaR, as well as the emergence of risk contribution measures.

Extensions to Standard VaR

Liquidity-Adjusted VaR

Standard VaR measures the riskiness of a portfolio over a fixed, usually short, holding period. Inherent is the implicit assumption that the risk can be eliminated by the end of the holding period, by liquidating or hedging the portfolio. In periods of market illiquidity, this implicit assumption may not be valid. Moreover, even in more normal periods, it is unlikely to be valid for all instruments.

Liquidity-adjusted VaR (LaVaR) addresses this issue by recognizing that there are limits to the rate at which a portfolio can be liquidated. To illustrate a simple LaVaR implementation, consider a situation where only b units of an instrument can be sold each day; a 100-unit portfolio would then require n = 100/b days to liquidate. If liquidation of the position became necessary, a possible strategy would be to liquidate b units each day and invest the proceeds at the risk-free rate (rf). (Any positions not yet liquidated would remain exposed to market risk.) Under that trading strategy, b units would be exposed to market risk for one day and then invested at the risk-free rate rf for the remaining n - 1 days; another b units would be exposed to market risk for two days and then invested at rf for n - 2 days; another b units would be exposed to market risk for three days and then invested at rf for n - 3 days; and so on. If the initial price per unit is P0 and the ith-day return is ri, the trading strategy of liquidating b units on each of n days results in a liquidated value (at the end of n days) of

LaVaR is then obtained by estimating the distribution of this liquidated value, similar to the way standard VaR is obtained by estimating the distribution of the mark-to-market portfolio value. Since LaVaR measures portfolio risk over an n-day horizon, it follows that LaVaR will exceed a standard VaR over a one-day horizon whenever n > 1, but will be less than standard VaR over an n-day horizon.

LaVaR can be computed using a simulation of the evolution of returns over n days and can be approximated with an analytic variance-covariance (delta-normal) approach.
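A minimal sketch of the liquidation strategy just described, with illustrative numbers (price, daily volatility, risk-free rate and saleable quantity are assumptions): it builds the liquidated value for simulated return paths, then estimates LaVaR as a quantile of the simulated loss distribution.

import numpy as np

def liquidated_value(P0, returns, b, rf):
    """Terminal value of liquidating b units per day and investing proceeds at rf.

    returns : daily returns r_1 ... r_n of the instrument over the n-day wind-down.
    The block of b units sold on day i stays exposed to market risk for i days,
    then earns rf for the remaining n - i days, as described in the text.
    """
    n = len(returns)
    growth = np.cumprod(1.0 + np.asarray(returns))      # price path relative to P0
    value = 0.0
    for i in range(1, n + 1):
        value += b * P0 * growth[i - 1] * (1.0 + rf) ** (n - i)
    return value

# Illustrative LaVaR by simulation: 100 units, 10 units saleable per day (n = 10 days).
rng = np.random.default_rng(1)
sims = np.array([liquidated_value(50.0, rng.normal(0.0, 0.02, 10), 10, 0.0002)
                 for _ in range(20_000)])
laVaR_95 = np.percentile(100 * 50.0 - sims, 95)         # loss versus the immediate mark-to-market value
print(round(laVaR_95, 2))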


Mark to Future

Standard VaR can be viewed as the second step in the evolution of risk measurement methodology. Algorithmics' Mark to Future (MtF) represents a third step. MtF is a scenario-based framework, encompassing both nonprobabilistic scenario analysis (by looking at a small set of pre-specified scenarios) and Monte Carlo simulation (by assigning probabilities to the scenarios).

MtF captures the passage of time by simulating the evolution of market factors and portfolio values. This permits the computation of VaR and other risk measures at various horizons. It also makes it possible to incorporate path dependency of individual instruments and portfolio values, as well as maturation of instruments, prepayments, reinvestment of cash flows, and dynamic portfolio strategies. The ability to incorporate changes in portfolios is particularly important when considering longer time horizons. Standard VaR computations assume that the portfolio is constant over time. This perspective remains relevant for derivatives dealers: a one-day VaR provides a useful summary measure of risk assuming that the dealer doesn't liquidate any positions or change any hedges. However, over the longer horizons relevant for investment and portfolio management, the assumption of no changes in the portfolio makes a standard VaR less useful. Dynamic features of actual portfolios can only be captured by Monte Carlo simulation over multiple time points that allows the user to specify rules for changing the positions as functions of factor realizations, instrument or portfolio values, or other features such as instrument deltas. This third step permits incorporation of credit risk, by including credit spreads or indexes of credit quality as factors. And, at the cost of introducing potentially large numbers of factors, credit ratings of individual obligors could be included. The default and transition probabilities can depend on other market factors, allowing the approach to capture correlations between credit migrations (including default) and other market factors, e.g., wrong-way exposures. Thus, in addition to generating VaR estimates that combine market and credit risk, the MtF approach can produce estimates of potential credit exposure that reflect correlations. Since this third step permits portfolio composition to depend on market factors, MtF can also be used to examine liquidity risk. Funding liquidity risk can be measured by tracking a cash account. In the case of asset liquidity risk, including market trading volumes as market factors and specifying portfolio holdings as functions of the factors makes it possible to simulate illiquid scenarios, i.e., scenarios where portfolio liquidation requires n days. The additional risk is revealed by a VaR calculation over an n-day horizon. Further, allowing trading volumes to


depend on the other factors makes it possible to capture the relation between liquidity risk and extreme market movements.

Emergence of Risk Contribution Measures

As the use of VaR has expanded from a simple communication device to play a role in managing the portfolios of banks and other financial institutions, interest has naturally focused on the decomposition of risk into its sources and on measures of the risk contributions of instruments, asset classes, and market factors. There are two main approaches to measuring risk contributions, which we call incremental and marginal.

Incremental Decomposition

Incremental decomposition is similar to regression analysis. Express the return r on a portfolio in terms of the changes in (or returns to) K factors, i.e.

r = b1 DF1 + b2 DF2 + ... + bK DFK + e



where DFk is the change in the kth factor, bk measures the sensitivity of the portfolio return to the kth factor, and e is a residual (which may be zero). For any factor model, it is possible to compute the proportion of the variance of r explained by the K factors. Analogous to the R-squared of a multiple regression, we denote this measure R2K. To compute the risk contribution of the kth factor, we would consider the (K-1)-factor model that drops the kth factor,

r = b1 DF1 + ... + b(k-1) DF(k-1) + b(k+1) DF(k+1) + ... + bK DFK + e

and compute the proportion of the variance of r explained by the K-1 factors, R2K-1. The risk contribution of the kth factor is then R2K - R2K-1. A limitation of incremental decomposition is that it depends on the order in which the factors are considered. While some situations might have a natural ordering of factors, in most cases the order is less apparent. In such cases, Golub and Tilman (2000) suggested that at each step one should search over all of the remaining factors (or groups of factors) to find the one with the largest risk contribution.
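A small sketch of the incremental decomposition on simulated data (the factor model and its coefficients are illustrative assumptions): the risk contribution of each factor is the drop in R-squared when that factor is excluded.

import numpy as np

rng = np.random.default_rng(2)

# Simulated factor changes and a portfolio return driven by them (illustrative data).
T = 500
F = rng.normal(size=(T, 3))                         # changes in K = 3 factors
b_true = np.array([0.8, 0.5, 0.2])
r = F @ b_true + rng.normal(scale=0.5, size=T)      # portfolio return with a residual

def r_squared(y, X):
    """Proportion of the variance of y explained by a least-squares fit on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_full = r_squared(r, F)                           # R-squared with all K factors
for k in range(3):
    X_minus_k = np.delete(F, k, axis=1)             # drop the k-th factor
    contribution = r2_full - r_squared(r, X_minus_k)
    print(f"factor {k + 1}: incremental risk contribution = {contribution:.3f}")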


Marginal Decomposition

An important property of both the standard deviation and VaR measures of a portfolio is that scaling all positions by a common factor, k, has the effect of scaling the standard deviation and VaR by the same factor, k. An implication of this scaling property of the risk measures is that the portfolio VaR can be decomposed into the risk contributions of the various positions. In particular, letting w = (w1, w2, ..., wN) denote the portfolio weights on the N assets or instruments in the portfolio, the portfolio VaR can be expressed as

Not surprisingly, the risk contribution of a position depends crucially on the covariance of the return on that position with the return on the existing portfolio. In fact, in the analytic variance-covariance (delta-normal) approach the term ∂VaR(w)/∂wi is precisely the covariance. This is zero when the position is uncorrelated with the existing portfolio, in which case the risk contribution is zero. When the correlation is positive the risk contribution is positive; when it is negative the position serves as a hedge, and the risk contribution is negative. In interpreting the risk decomposition it is crucial to remember that it is a marginal analysis. The marginal effects cannot be extrapolated to large changes, because the partial derivatives change as the position sizes change. In terms of correlations, changes in the size of a position change the correlation between the portfolio and that position. For example, if the ith position is uncorrelated with the current portfolio, the risk contribution of a small increase in the ith position is zero. However, if the size of the ith position increases, that position comprises a larger fraction of the portfolio, requiring that the correlation increase; the risk contribution of the ith position increases with the position size.
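The marginal decomposition can be sketched as follows in the delta-normal setting, where portfolio VaR is a fixed multiple of the portfolio standard deviation, so contributions to the standard deviation carry over to VaR. The weights and covariance matrix are illustrative assumptions.

import numpy as np

# Illustrative positions and covariance matrix of position returns.
w = np.array([0.5, 0.3, 0.2])
cov = np.array([[0.040, 0.006, 0.000],
                [0.006, 0.025, 0.004],
                [0.000, 0.004, 0.010]])

sigma_p = np.sqrt(w @ cov @ w)                   # portfolio standard deviation
marginal = (cov @ w) / sigma_p                   # d(sigma_p)/d(w_i): proportional to covariance with the portfolio
contributions = w * marginal                     # risk contribution of each position

print(round(sigma_p, 4))
print(np.round(contributions, 4), "sum =", round(contributions.sum(), 4))  # contributions sum to sigma_p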


Alternatives to Standard VaR

VaR is being called on to perform functions not imagined when it was first developed as a means of communication between trading desks and senior management. Using VaR to determine economic or regulatory capital requires the use of very high confidence levels, thereby focusing attention on the tails of the distributions of changes in market rates and prices. However, the techniques behind the standard VaR measure perform best where there is a lot of data (i.e., near the centers of distributions). Not surprisingly, alternatives to VaR have been proposed that perform better in the tails.

Extreme Value Theory

It is well known that the actual distributions of changes in market rates and prices have fat tails relative to the normal distribution, implying that an appropriately fat-tailed distribution would provide better VaR estimates for high confidence levels. However, since the data contain relatively few extreme observations, we have little information about the tails. So, selecting the right fat-tailed parametric distribution and estimating its parameters are inherently difficult tasks. Extreme value theory (EVT) offers a potential solution. Loosely, EVT tells us that the behavior of certain extreme values is the same (i.e., described by a particular parametric family of distributions), regardless of the distribution that generates the data. Two principal distributions appear in EVT. The Generalized Extreme Value (GEV) distribution describes the limiting behavior of the maximum of a sequence of random variables. The Generalized Pareto Distribution (GPD) describes the tail of a distribution above some large value and thus can be used to compute the probabilities of extreme realizations, exactly what is required for VaR estimates at high confidence levels. To indicate the usefulness of the GPD, suppose we were considering a random variable X (e.g., a mark-to-market loss) with distribution function F. Focusing on the upper tail, define a threshold u. The conditional distribution function F(x | X > u) gives the conditional probability that the excess loss (i.e., X - u) is less than x given that the loss exceeds u. EVT holds that, as the

threshold u gets large, the conditional distribution function approaches the GPD given by

Where e and b are parameters that must be estimated. The preceding equation implies that the conditional probability of an excess loss greater than x is approximated by

From this, the unconditional probability of a loss can be obtained as

Evidence suggests that the GPD provides a good fit to the tails of the distributions of changes in individual market rates and prices (see Neftci (2000)); and EVT appears to be useful in measuring credit risk when there is a single important factor (Parisi (2000)). However, the available empirical evidence does not bear directly on the question of whether EVT is useful for measuring the VaR of portfolios that depend (perhaps nonlinearly) on multiple sources of risk. Classical EVT is univariate, i.e., it does not characterize the joint distribution of multiple risk factors. To apply EVT to portfolios that depend on multiple sources of risk, one must estimate the distribution of P/L by historical simulation, and then fit the GPD to the tail of the distribution of P/L. We are not aware of any results on the performance of this approach.
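A minimal sketch of the GPD tail-fitting idea using scipy, on simulated fat-tailed losses (the data, threshold choice and confidence level are illustrative assumptions, and the single-factor setting sidesteps the multivariate issues noted above):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated daily losses with a fat tail (Student-t), purely illustrative.
losses = rng.standard_t(df=4, size=5000)

u = np.quantile(losses, 0.95)                    # threshold for "extreme" losses
excesses = losses[losses > u] - u                # excess losses X - u

# Fit the Generalized Pareto Distribution to the excesses (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

# Unconditional 99% quantile of loss implied by the tail model:
p_exceed = (losses > u).mean()                   # P(X > u)
q = 0.99
var_99 = u + stats.genpareto.ppf(1 - (1 - q) / p_exceed, shape, loc=0, scale=scale)
print(round(var_99, 3), round(np.quantile(losses, q), 3))   # EVT estimate vs. empirical quantile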


Coherent Measures of Risk

VaR was criticized from the outset because it says nothing about the magnitude of losses greater than the VaR. A more subtle criticism of VaR is that it does not correctly capture the effect of diversification (even though capturing the benefits of diversification is one of the commonly cited advantages of VaR). To see this, suppose the portfolio contains short digital puts and calls on the same underlying. Each option has a notional amount of $10 million, a 4 percent probability of being exercised, time to


expiration equal to the VaR time horizon, and a premium of $400,000. The 95 percent confidence VaR measures of the two positions considered separately indicate no risk, because each suffers a loss with a probability of only 4 percent. However, the 95 percent VaR of the aggregate portfolio is $10 million - 2 x $400,000 = $9.2 million, and the VaR of the diversified portfolio composed of one-half of each position is (1/2) x ($10 million - 2 x $400,000) = $4.6 million. Artzner, Delbaen, Eber, and Heath (hereafter, ADEH) (1997, 1999) argue that risk measures should be coherent, i.e., they should satisfy the following four properties (where the vectors X and Y denote the possible state-contingent payoffs of two different portfolios and ρ(X) and ρ(Y) their risk measures):

A risk measure should reflect the impact of hedges or offsets; so, the risk measure of an aggregate portfolio must be less than or equal to the sum of the risk measures of the smaller portfolios which comprise it.

The risk measure is proportional to the scale of the portfolio, e.g., halving the portfolio halves the risk measure.

Adding a risk-free instrument to a portfolio decreases the risk by the size of the investment in the risk-free instrument. This property ensures that coherent risk measures can be interpreted as the amount of capital needed to support a position or portfolio. The fourth property is monotonicity: if one portfolio's payoff is at least as large as another's in every state, its risk measure should be no greater. Note that VaR is not a coherent risk measure, because the aggregate portfolio of the digital put and call discussed above fails to satisfy property (1), while the diversified portfolio fails to satisfy the combination of (1) and (2) with a = 1/2. ADEH show that all coherent risk measures can be represented in terms of generalized scenarios. In particular, first construct a list of K scenarios of future market factors and portfolio values, as might be done in Monte Carlo simulation or deterministic scenario analysis. Second, construct a set of M probability measures on the K scenarios. These probability measures will determine how the different scenarios are weighted in the risk measure, and need not reflect the likelihood of the scenarios. For example, one measure might say that the K scenarios are equally likely, while another might say that the kth scenario occurs with probability one while the other scenarios have probability zero. Third, for each of the M probability measures, calculate the expected loss. Finally,

the risk measure is the largest of the M expected losses.


This seemingly abstract procedure corresponds to some widely used risk measures. For example, ADEH show that the expected shortfall measure defined by E[loss | loss > cutoff] is a coherent risk measure. And the Chicago Mercantile Exchange's Standard Portfolio Analysis of Risk (SPAN) methodology can be shown to be a coherent risk measure. But coherent risk measures do not appear to be making inroads on VaR among banks and their regulators.
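The failure of subadditivity in the digital-option example can be checked directly. The sketch below just encodes the two-point loss distributions implied by the numbers in the text (a $10 million payout with 4 percent probability per option and $400,000 of premium each), treating the put and call exercise events as disjoint:

def var(losses, probs, confidence=0.95):
    """Quantile-of-loss VaR for a discrete loss distribution."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    cum = 0.0
    for i in order:
        cum += probs[i]
        if cum >= confidence:
            return losses[i]

# One short digital alone: 4% chance of paying 10MM, always keeps the 400k premium.
single = var([-400_000, 9_600_000], [0.96, 0.04])
# Both shorts together: 8% chance that one of them pays 10MM, 800k of premium kept.
aggregate = var([-800_000, 9_200_000], [0.92, 0.08])

print(single, aggregate)   # -400000 (no loss at 95%) vs. 9200000: VaR is not subadditive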


A drawback of explicitly scenario-based approaches is that it is unclear how reasonably to select scenarios and probability measures on scenarios in situations in which portfolio values depend on dozens or even hundreds of risk factors. This requires significant thought, and probably knowledge of the portfolio. In situations with many market factors, scenario-based approaches lose intuitive appeal and can be difficult to explain to senior management, boards of directors, regulators, and other constituencies.

Cash (or Cash Flow) at Risk

We define C-FaR (cash flow at risk) as the probability distribution of a company's operating cash flows over some horizon in the future, based on information available today. For example, if it is December 31, 2000, a company's quarter-ahead C-FaR is the probability distribution of operating cash flows over the quarter ending March 31, 2001; and its year-ahead C-FaR is the probability distribution of cash flows over the year ending December 31, 2001. These probability distributions can be used to generate a variety of summary statistics, such as five-percent or one-percent worst-case outcomes, thereby providing corporate CFOs with answers to questions like the following: how much can my company's operating cash flow be expected to decline over the next year if we experience a downturn that turns out to be a five-percent tail event? While it is easy to define the concept of C-FaR, it is much more difficult to come up with a reliable C-FaR estimate for any given company. One way to see the challenges associated with constructing a C-FaR measure is to compare it with the value-at-risk (VaR) measure commonly used by banks and other financial institutions. Although there are some obvious differences between the two (for example, C-FaR focuses on cash flows while VaR focuses on asset values, and C-FaR looks out over a horizon of a quarter or a year while the horizon for VaR is typically measured in days or weeks), C-FaR is an attempt to create an analogue to VaR that can be useful for non-financial firms. Thus one might hope to be able to draw on the same basic methodological approach. The standard approach to estimating VaR for a bank is what might be termed a bottom-up method. One begins by enumerating each of the bank's assets: every loan, trading position, and so forth. The risk exposures (to interest rate shocks, credit risk, foreign exchange movements) of each of these assets are then quantified. Finally, these risks are aggregated across the bank's entire portfolio. Although far from perfect, this VaR methodology works reasonably well to the extent that (1) a bank can identify each of its main sources of risk and (2) these sources of risk correspond (either directly or indirectly) to traded assets for which there is good historical data on price movements. The method is perhaps best suited to evaluating the risks of a trading desk that deals in relatively liquid instruments. Now imagine trying to apply this same bottom-up approach to a non-financial firm. For concreteness, consider the case of the computer manufacturer Dell, an example to which we will return repeatedly. How does one even begin to identify all the individual risks to Dell's cash flows? And once these risks have been identified, how can they be accurately quantified? No doubt Dell faces some of the same tradeable (and hence directly

measurable) risks that a bank does; it has foreign exchange exposure, for example.


But even if one can use standard VaR-like tools to model the quantitative impact of Dell's FX exposure, this risk is likely to be second-order relative to, say, the risk that Dell does a poor job of marketing and customer support and loses significant market share to Gateway and Compaq. The bottom line is that while one can, and some consultants do, attempt to implement a bottom-up VaR analogue for companies like Dell, there is a danger that such an approach will simply leave out some important sources of risk, badly mis-measure others, and thus lead to a highly inaccurate estimate of overall C-FaR. Given these difficulties with a bottom-up method, a natural alternative is to approach matters from the top down. That is, if the ultimate item of interest is the variability of Dell's operating cash flows, why not simply look directly at their historical cash flow data? The obvious advantage of doing so is that this data should summarize the combined effect of all the relevant risks facing Dell, thereby avoiding the need to build a detailed model of the business from the ground up. Simply put, if Dell's C-FaR is high, this should be manifested in a high volatility of its historical cash flows. Unfortunately, there is also a major problem with going this route: lack of data. The best one can do is to get quarterly data on cash flows. Thus even if one is willing to go back say five years (it is hard to imagine going back much further for Dell, given how rapidly the company is evolving), this leaves us with only 20 observations of Dell's cash flows. This is obviously an order of magnitude too few, particularly given that the goal of a C-FaR measure is to get a sense of the likelihood of extremely rare events. But what if one could identify a group of companies that are good comparables for Dell? With 25 such companies and five years' worth of quarterly data on each, we would be up to 500 observations. With 50 comparable companies, we would have 1,000 observations. At this point, it would become possible to estimate five-percent (and even one-percent) tail probabilities with some confidence. As explained in detail below, we use a relatively sophisticated benchmarking technique to find the best comparables for a given target company, searching for those other companies that most closely resemble our target on four dimensions: (1) market capitalization, (2) profitability, (3) industry riskiness, and (4) stock-price volatility. One way to gauge the usefulness of this approach is to examine the extent to which it produces plausibly different answers for companies with different characteristics. In Figure 1, we plot the one-year-ahead C-FaR probability distributions for three companies: Coca-Cola, Dell, and Cygnus. (Cygnus is a $400 million market cap company engaged in the development and manufacture

of diagnostic and drug delivery systems.) For comparability, all the distributions are centered on zero and are scaled in units of earnings before interest, taxes, depreciation and amortization (EBITDA) per $100 of assets. As can be seen from the figure, the distributions for the three companies are very different. For Coca-Cola a five-percent worst-case scenario involves EBITDA falling short of expectations by $5.23 per $100 of assets; for Dell the corresponding figure is $28.50; for Cygnus it is $47.31. An obvious strength of this methodology is that, since we are looking directly at cash flow variability, by definition it produces the right answers on average. That is, if we run the analysis repeatedly for a number of companies, on average we will generate C-FaR estimates that are neither systematically too high nor too low. This property is not shared by any bottom-up approach. For example, if a bottom-up model ignores an important source of risk, it will produce estimates that are generally too low, perhaps substantially so. Of course, this comparables method is not without its drawbacks. Chief among these is the fact that one cannot capture company-level idiosyncrasies that might give rise to differences in C-FaR. Thus if we create a peer group of 25 companies to estimate Dell's C-FaR, and Dell is in some way atypical of its peers (perhaps it has more overseas sales, and hence more FX exposure), this will not be captured. Another limitation of this approach is its inability to tell us much if anything about the expected impact of changes in a company's strategy on its C-FaR. For example, if Dell decides to move farther into overseas markets, we cannot say by how much this might raise its C-FaR. The ability to model these kinds of specific company-level effects is the leading advantage of the bottom-up approach used in VaR applications.

An analogy may help to bring the costs and benefits of our methodology into sharper focus. Our approach to C-FaR is analogous to the common practice of estimating the value of a company by looking at the multiples (of market-to-book, price-to-earnings, price-to-cash flow) at which its peers trade in the market. In contrast, the bottom-up approach used in VaR models is analogous to valuing a company by forecasting the cash flows from each of its operating assets, and then doing a discounted cash flow (DCF) calculation. Neither one of these valuation approaches, comparables or DCF, can be said to strictly dominate the other; each has its strengths and weaknesses. In particular, the comparables approach to valuation will be at a comparative advantage in situations where there is not much detailed company-specific data available to make cash flow forecasts, but where there is a well-defined set of peer firms. Roughly the same can be said of our comparables approach to measuring C-FaR. Notably, in the valuation arena, comparables methods are very widely used by practitioners; even when they are not relied on exclusively, they are at a minimum seen as a useful complement to bottom-up DCF methods. The remainder of the article is organized as follows. We begin in the next section by discussing why and how C-FaR can be useful in informing a variety of corporate finance decisions. We then go on to describe the construction of our basic model in some detail, as well as to sketch some specific extensions and applications.

WHY WOULD A COMPANY WANT TO KNOW ITS C-FaR?

Capital Structure Policy

The classic debt-equity choice is usually framed as trading off the benefits of debt (tax shields, increased discipline on managers) against the potential costs associated with financial distress. To operationalize this tradeoff, one needs a quantitative sense of the probability of getting into distress, given a particular capital structure. Perhaps the most important determinant of this probability of distress is the variability of cash flows; hence the usefulness of C-FaR. To give a concrete example of how a C-FaR measure can be used in thinking about capital structure policy, consider the case of the electricity industry. This industry, which until a few years ago was largely regulated, has been subject to enormous changes as the result of rapid deregulation. Our estimates of C-FaR for the electricity industry, which we discuss in more detail below, suggest that the volatility of EBITDA for the typical firm has roughly doubled from the early 1990s to the late 1990s. Against the backdrop of these large increases in volatility, an important question to ask is whether electricity companies' capital structures have adjusted appropriately. In 1992, the median electricity company had interest coverage, defined as the ratio of earnings before interest and taxes (EBIT) to interest expense, of 2.81. Moreover, using the results of our C-FaR analysis for the early 90s, one can show that in a five-percent worst-case scenario, coverage would fall from 2.81 to 2.23, a lower number to be sure, but one that still would seem to indicate more than adequate cash flow relative to debt obligations. Thus it appears that, prior to deregulation, the capital structure of the typical electricity company did not pose a very high risk of financial distress. As of 1999, debt ratios for the industry as a whole had not changed much from their levels earlier in the decade. Indeed, the median coverage, at 2.82, was virtually identical to its value in 1992. But, with the large increase in cash flow volatility, the five-percent worst-case coverage had fallen to 1.65. This suggests that the risk of financial distress in the electricity industry, though perhaps not enormous by the standards of other industries, had become significantly greater in the later part of the decade. The point here is not to say that the electricity companies' current capital structures are right or wrong in any absolute sense. Rather, it is just to illustrate how a C-FaR estimate can be used in conjunction with capital structure data to help formulate debt-equity tradeoffs in a more precise, quantifiable fashion. Clearly, the same apparatus can be used to think about other closely related financial policy questions such as the appropriate level of cash reserves and credit lines.

Risk Management Policy

What is the value added by risk management strategies such as the use of derivatives to hedge commodity-price exposures, or the purchase of insurance policies? Do the costs of such risk management exceed the benefits? Recent research in corporate finance has shown that risk management can indeed be an important tool for creating shareholder value. But this work also stresses that the value of risk management is greater when there is a higher probability that operating cash flows will fall to the point that important strategic investments are compromised. Thus, in

order to quantify the benefits of risk management, one again needs to have an accurate picture of the probability distribution of cash flows. This point can be illustrated with a simple example. Imagine a company whose capital budget for the upcoming year is $100. The company has no ability to raise finance externally, perhaps because it is too highly leveraged to take on more debt, and is also reluctant to issue new equity. The company forecasts operating cash flows of $120. Aside from these operating cash flows, it also faces a 10% probability of being hit with a product-liability suit that will cost it $10. It can buy product-liability insurance at a cost of $1.10, which represents a 10% markup over the actuarially fair value. (Fair value = 10% x $10 = $1.) Should it buy the insurance? In Scenario A, the operating cash flows of $120 are a sure thing. In this case there is no reason for the company to buy insurance; even if it is hit with the lawsuit, it will still have $110 left, which is enough to do its planned investment. However, in Scenario B, the forecast of $120 is subject to some volatility. In particular, actual operating cash flows (before any product-liability suit) will either be $150 or $90, each with probability 50%. Now insurance is potentially valuable, because half of the time it kicks in when the company has a cash shortfall relative to its investment needs, and hence will be underinvesting. To be more specific, assume further that each dollar invested yields an NPV of $1.40, so that a failure to invest a dollar costs the company 40 cents in forgone value. In this case the company would be willing to pay up to $1.20 for the insurance policy. The moral of the example is simple: holding fixed the risks to be hedged (the product-liability risk, in the example), the greater are the unhedgeable background risks, as measured by C-FaR, the greater is the value of risk management.
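The $1.20 figure can be reproduced with a short calculation. The sketch below follows one reading of the example, assuming the insurance premium is paid out of separate funds so that it does not itself reduce the cash available for investment; under that assumption the expected-cost comparison yields exactly $1.20.

# Numbers from the example: capital budget 100, NPV of 1.40 per dollar invested
# (so each dollar of underinvestment forgoes 0.40 of value), operating cash flow of
# 150 or 90 with equal probability, a 10% chance of a 10 lawsuit, insurance at 1.10.
budget, npv_loss_per_dollar = 100.0, 0.40
suit_prob, suit_cost = 0.10, 10.0

def expected_cost(insured):
    cost = 0.0
    for cash, p_cash in [(150.0, 0.5), (90.0, 0.5)]:
        for suit, p_suit in [(True, suit_prob), (False, 1.0 - suit_prob)]:
            # An uninsured suit drains investable cash; the premium is assumed not to.
            available = cash - (suit_cost if (suit and not insured) else 0.0)
            shortfall = max(budget - available, 0.0)
            payout = suit_cost if (suit and not insured) else 0.0
            cost += p_cash * p_suit * (npv_loss_per_dollar * shortfall + payout)
    return cost

willingness_to_pay = expected_cost(insured=False) - expected_cost(insured=True)
print(round(willingness_to_pay, 2))   # 1.20, versus the 1.10 premium quoted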

Disclosure: Managing Investors' Expectations About Earnings Volatility

It is a fact of life that some investors, as well as analysts, are extremely concerned about volatility in reported quarterly earnings, and that this concern translates into pressure on management to meet earnings targets. By disclosing the results of a comparables-based C-FaR analysis to investors or analysts ahead of time, a company may be able to help put earnings shocks into a credible, objective, peer-benchmarked perspective. In particular, one could make statements like the following: For other companies in our peer group, an X% deviation of quarterly earnings from expectations is not at all atypical; indeed, it occurs roughly Y% of the time.

BUILDING THE MODEL

We begin by assembling quarterly income statement and balance-sheet data from Compustat. In our baseline analysis, we pool together data from firms in all non-financial industries. However, our model can also be applied to individual well-defined industries in which there are enough firms. Electricity companies represent one such industry; we will review this industry-specific application later. Our basic measure of operating cash flow is earnings before interest, taxes, depreciation, and amortization (EBITDA). Alternatively, one can use earnings before interest and taxes (EBIT). The results are virtually identical in either case, which is not surprising given that there is very little unpredictable variation in depreciation and amortization a quarter or a year ahead. To allow for comparability across firms, we scale EBITDA by start-of-period book assets. We clean the data by eliminating firms with very tiny values of book assets (those in the lowest five percent of the distribution each quarter), so as to avoid situations where the ratio EBITDA/Assets becomes unboundedly large. We also screen out firm-quarters where property, plant and equipment (PP&E) changes by more than 50% in a quarter. The idea here is to eliminate large mergers or other dramatic changes in a company's asset base, which are not surprises from the company's point of view, but which can potentially induce a great deal of volatility in measured EBITDA/Assets. We have experimented with setting this PP&E screen at different tolerances (e.g., 20%, 30%, and 40%) and our results are essentially identical.


Do cu

The First-Step Forecasting Regression

In order to measure how much cash flow deviates from expectations, one needs to have a forecast of expected cash flow. In our case, this means we need a model to forecast cash flow both a quarter into the future and a year into the future. To do so, we use a very simple autoregressive specification. For our quarterly forecast, we regress EBITDA/Assets in quarter t against four lags of itself: that is, against EBITDA/Assets in quarters t-1, t-2, t-3, and t-4. We also add to the regression quarterly dummy variables to account for possible seasonality in the data. In any quarter t, the model is fit using the past five years' worth of data. Panel A of Table 1 gives an example of such a regression, which is fit using data from 1991 to 1995. As can be seen, the simple autoregressive structure does a good job of forecasting the next quarter's EBITDA/Assets, attaining an R2 of 58%. Indeed, we have experimented with a variety of more complicated models, and in no case were we able to do significantly better on this score. To forecast a year into the future, we use the same right-hand-side variables as before: EBITDA/Assets in quarters t-1, t-2, t-3, and t-4, as well as the quarterly dummies. The only modification is that now the dependent variable is the sum of EBITDA in quarters t, t+1, t+2 and t+3, all divided by assets at the start of quarter t. That is, we are now forecasting the next full year's worth of EBITDA. The results from this regression are shown in Panel B of Table 1. Again, the R2 is quite high, this time reaching 63%. It is important to be clear as to the purpose of these forecasting regressions. Our ultimate goal is not to make more accurate quarter-ahead or year-ahead predictions of expected cash flow than could be produced by, say, industry experts or well-informed company insiders. Rather, our interest is in making statements about the entire probability distribution of shocks to cash flow, particularly the tails of this distribution.
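A rough sketch of the quarter-ahead specification in Python follows. The data are simulated and the column names are assumptions, not the article's; it fits the four-lag regression with quarterly dummies on a five-year window and reports an in-sample fitted value and residual, whereas the article fits on the prior five years and forecasts the next quarter out of sample.

import numpy as np
import pandas as pd

# Illustrative quarterly series for one firm: EBITDA scaled by start-of-period book assets.
rng = np.random.default_rng(5)
idx = pd.period_range("1991Q1", "1995Q4", freq="Q")
df = pd.DataFrame({"ebitda_assets": 0.05 + 0.02 * rng.standard_normal(len(idx))}, index=idx)

# Four lags of the dependent variable plus quarterly dummies for seasonality.
for k in range(1, 5):
    df[f"lag{k}"] = df["ebitda_assets"].shift(k)
dummies = pd.get_dummies(df.index.quarter, prefix="q")
dummies.index = df.index
data = pd.concat([df, dummies], axis=1).dropna()

X = np.column_stack([data[[f"lag{k}" for k in range(1, 5)]].to_numpy(),
                     data[["q_1", "q_2", "q_3", "q_4"]].to_numpy(dtype=float)])
y = data["ebitda_assets"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X[-1] @ beta                 # fitted value for the latest quarter in the window
shock = y[-1] - fitted                # the kind of forecast error pooled to build C-FaR
print(round(fitted, 4), round(shock, 4))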


But to define a shock, one needs a benchmark for what cash flow is expected to be in the absence of a shock. In other words, our shocks correspond to forecast errors: deviations of cash flow from their expected values. Thus the cash flow forecasts we construct are not an end in and of themselves, but rather a necessary ingredient to construct these forecast errors. A concrete example may be helpful. Suppose we want to construct the quarter-ahead forecast error for company XYZ for the first quarter of 2000. We begin by taking the perspective of December 31, 1999. Using data from the prior five years (December 1994 to December 1999), we fit our model and make a forecast for XYZ's EBITDA/Assets for the first quarter of 2000. Let's say this forecast is .05; that is, our regression model predicts that XYZ will have EBITDA of $5 for every $100 of assets in the first quarter of 2000. The forecast is then compared to the actual realized value of EBITDA/Assets for XYZ in the first quarter of 2000, thereby generating a forecast error. So if XYZ's actual EBITDA/Assets in the first quarter of 2000 turns out to be .04, we would say the forecast error is -.01 (actual of .04 minus forecast of .05). This is the unanticipated shock to XYZ's EBITDA/Assets. The procedure for calculating year-ahead forecast errors is the same, except in this case we would compare XYZ's EBITDA/Assets for the full year 2000 to the value forecast as of December 1999. We repeat this procedure for every company-quarter in our database. This gives us a very large pool of forecast errors. For example, even if we restrict ourselves to forecast errors from the most recent six years (1994-1999), we have roughly 85,000 observations.

Sorting the Forecast Errors Based on Company Characteristics


The big pool of 85,000 forecast errors represents a hodgepodge of data from all different types of companies. In order to learn something about the probability distribution of cash flows for a particular company, we want to dip into this pool and extract only those forecast errors that come from its peer group, suitably defined. In other words, we need to subdivide the 85,000 observations into sub-samples, where each sub-sample is composed of firms with roughly similar characteristics. To do so, we need to have an idea of what the salient firm characteristics are, i.e., which characteristics matter for forecast-error volatility.

After substantial experimentation, we have settled on four characteristics that seem to be most strongly associated with patterns in forecast-error volatility. The first of these is market capitalization. There is a very strong, systematic tendency for larger firms to have smaller forecast errors, most likely as a result of the fact that larger firms are better diversified. The second key characteristic is profitability, measured as the average value of EBITDA/Assets over the prior four quarters. The third is the riskiness of industry cash flow, and the fourth is stock-price volatility, calculated using daily stock price data over the prior quarter. We create sub-samples based on these four characteristics as follows. Beginning with the full pool of roughly 85,000 forecast errors, we first sort firms into three buckets based on market capitalization. All forecast errors coming from firms in the bottom one-third of the sample by market cap in any given quarter are assigned to market-cap bucket 1, those from the middle one-third of the sample are assigned to market-cap bucket 2, and so forth. Next,


we further subdivide each market-cap bucket into three sub-buckets according to whether an observation corresponds to a firm in the bottom, middle, or top one-third of the bucket by profitability. At this point we have nine sub-samples. We then further subdivide these nine sub-samples again, according to a measure of industry cash-flow risk. Finally, we subdivide the resulting 27 sub-samples once more, this time based on their stock-price volatility. When all is said and done, we have 81 bins, corresponding to three-way splits on each of four dimensions. In each of these 81 bins, there are approximately 1,000 forecast errors. The hope at this point is that, within each of the 81 bins, the forecast errors come from a relatively homogeneous group of firms, matched on the characteristics that matter most.
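A simplified sketch of the binning step follows. It cuts each of the four characteristics into terciles independently (the article subdivides sequentially within buckets) and then reads off the empirical five-percent tail of forecast errors within each bin; all data and column names are illustrative assumptions.

import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 9000   # illustrative firm-quarters

firms = pd.DataFrame({
    "market_cap":     np.exp(rng.normal(6, 2, n)),
    "profitability":  rng.normal(0.05, 0.04, n),
    "industry_risk":  rng.gamma(2.0, 0.02, n),
    "stock_vol":      rng.gamma(2.0, 0.15, n),
    "forecast_error": rng.normal(0, 0.05, n),
})

# Three-way sort on each of the four characteristics -> 3**4 = 81 bins.
for col in ["market_cap", "profitability", "industry_risk", "stock_vol"]:
    firms[col + "_bucket"] = pd.qcut(firms[col], 3, labels=[1, 2, 3])

bins = firms.groupby(["market_cap_bucket", "profitability_bucket",
                      "industry_risk_bucket", "stock_vol_bucket"], observed=True)

# Empirical five-percent tail of the forecast-error distribution within each bin,
# analogous to the grid of Table 2 (values here are simulated, not the article's).
tails = bins["forecast_error"].quantile(0.05)
print(tails.head())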

To the extent that this homogeneity assumption holds true, we now have a very powerful nonparametric way to assess C-FaR for any given firm. Simply locate which of the 81 bins the firm in question belongs to, based on its current values of market cap, profitability, industry risk, and stock price volatility. Then the roughly 1,000 forecast errors in that bin can be thought of as describing the firms empirical C-FaR distribution. This procedure is how we came up with the plots for Coca-Cola, Dell and Cygnus shown in Figure 1 earlier. For example, Coca-Cola is in the top one-third of the sample with respect to market cap. Within market cap bucket 3, it is also in the top one third with respect to profitability. On the other hand, it is in the lowest third of its sub samples with respect to both industry risk and stock-price volatility. Thus overall, Coca-Cola is assigned to the bin that we denote {3, 3, 1, 1}. The plot of Coca-Colas year-ahead C-FaR distribution in Figure 1 is nothing more than the histogram of the year-ahead forecast errors in bin {3, 3, 1, 1}. A similar logic applies for Dell and Cygnus, which are assigned to bins {3, 3, 3, 3} and {2, 1, 3, 3} respectively. The empirical probability distributions of the sort shown in Figure 1 give us a great deal of flexibility. Since the data trace out the entire distribution, we do not need to rely on any assumptions about normality.10 Instead, to evaluate the five-percent tail for any given company, we simply look at the fifth percentile of the empirical distribution. Table 2 is a grid that reports the five-percent values of the C-FaR distribution, in units of EBITDA per $100 of assets, for each of the 81 bins. Panel A looks at quarter-ahead shocks, while Panel B looks at year-ahead shocks. The cells corresponding to our example companiesCoca-Cola, Dell and Cygnus have been highlighted in the table. Whether one looks at Panel A or Panel B of Table 2, several distinct patterns emerge. Smaller firms, as well as those in riskier industries or with higher stock-price volatility, all show markedly more extreme tail values. These patterns are all what one would expect, though it may be surprising just how strong they are in some cases. The effect of profitability is a bit more subtle. There is a general tendency for unprofitable firms to have riskier cash flows, which one might interpret to be a consequence of operating leverage. But this tendency does not hold across all the cells in the grid. Just to clarify the interpretation of the units in Table 2, consider the {3, 3, 3, 3} cell in the lower right-hand corner of Panel B, where




one finds the year-ahead number for Dell. This number is 28.50, which should be read as saying that, in a five-percent worst-case year, Dell's EBITDA would fall short of expectations by $28.50 for every $100 of book assets that it has. For example, applying the model at the start of Dell's 1999 fiscal year, when its book assets (net of cash and securities) stood at $3,696 million, the conclusion is that a five-percent worst-case scenario for 1999 would involve an EBITDA shortfall of $1,053 million (1,053 = 3,696 x .285). To get a sense of proportion, this figure can be compared to Dell's actual realized 1999 EBITDA of $2,419 million.


Assuming that $2,419 was also the forecasted value of EBITDA at the start of 1999, we could say that in the five-percent worst case, EBITDA would fall 43.5% below expectations ($1,053 / $2,419 = 43.5%).


Note that any statements that we make about EBITDA can be easily translated into statements about EBIT, or about after-tax net income. To the extent that Dell can perfectly forecast its depreciation and amortization a year ahead, its unanticipated EBIT shortfall is exactly the same as its EBITDA shortfall. And to get its net income shortfall, all one has to do is multiply by one minus the tax rate. Thus, assuming a tax rate of 35%, the five-percent worst-case net income shortfall is $684 million ($684 = 0.65 × $1,053), which is equivalent to 41.1% of actual realized 1999 net income.
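These back-of-the-envelope numbers are easy to verify. The short Python sketch below simply redoes the arithmetic; the asset, EBITDA, and tail figures are the ones quoted above, and the 35% tax rate is the assumption stated in the text.

# Worked example: translating the C-FaR table entry into dollar shortfalls for Dell.
# Figures are those quoted in the text; the 0.285 tail and 0.35 tax rate are stated there.

book_assets = 3696.0          # $ millions, net of cash and securities, start of fiscal 1999
tail_per_100_assets = 28.50   # five-percent worst-case EBITDA shortfall per $100 of assets
actual_ebitda = 2419.0        # realized 1999 EBITDA, $ millions
tax_rate = 0.35

ebitda_shortfall = book_assets * tail_per_100_assets / 100.0   # about $1,053 million
pct_of_ebitda = ebitda_shortfall / actual_ebitda               # about 0.435, i.e. 43.5%
net_income_shortfall = (1 - tax_rate) * ebitda_shortfall       # about $685 million; the text's
                                                               # $684 million uses the rounded $1,053

print(round(ebitda_shortfall), round(pct_of_ebitda * 100, 1), round(net_income_shortfall))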

A Closer Look at Dell's Peer Group
At this point, our four-characteristic sorting method of creating a peer group for any company may seem like something of a black box. What kind of comparables actually comes out of this approach? As an illustration, Table 3 presents a partial list of the comparable companies that the model generates for Dell, that is, some of Dell's peers in bin {3, 3, 3, 3}. Many of the natural suspects show up, such as Dell's closest competitors Compaq, Gateway, and Micron, as well as other large, profitable high-tech companies like Cisco. But not all of the comparables are what one might have expected. For example, retailers like Bed Bath & Beyond and Williams-Sonoma make the list as well. Again, they are there because they resemble Dell in terms of market cap, profitability, and stock-price volatility. And even though they are in a quite different industry, it is one that historically has had cash flow volatility comparable to that of Dell and its high-tech brethren.

Variations on the Basic Model
The basic comparables approach to C-FaR that we have described can be modified in a number of ways. Here we briefly discuss a couple of examples.

Customized, centered peer groups. Consider two firms, one in the 70th percentile of the market cap distribution and one in the 90th percentile. Our baseline methodology treats these firms as identical for market-cap purposes, sticking them both in the bucket corresponding to the top one-third of the sample (i.e., the bucket for all firms above the 67th percentile). On the other hand, the 70th percentile firm goes in a completely different bucket than one in the 65th percentile, because they are on opposite sides of the cutoff. This would seem to be an arbitrary and unattractive feature. An alternative approach is to create customized, centered peer groups for any given firm we study. For example, if we are looking at a firm in the 70th percentile by market cap, we could create for it a customized market-cap bucket such that this firm fits right into the middle of the bucket. In other words, the market-cap bucket for our 70th percentile firm would include all firms with market caps between approximately the 53rd percentile and the 87th percentile. A similar approach can be used when we sort on the other three characteristics. Although this involves re-running the model each time we look at a new firm (as opposed to just picking firms out on a once-and-for-all grid of the sort shown in Table 2), it arguably does a better job of creating representative peer groups.

Single-industry analyses. In our baseline C-FaR model, we analyze companies from all non-financial industries jointly. But this is not necessary. Indeed, in some cases it may make more sense to look at a single well-defined industry in isolation. This is particularly true if (1) the industry has enough firms to create a decent-sized pool of forecast errors and (2) there are specific questions about this industry that cannot be answered if it is pooled together with others. Consider the case of the electricity industry. As noted above, this industry has undergone rapid deregulation over the past several years. In light of this deregulation, a natural question to ask is: how has the C-FaR of the typical electricity company changed over the course of the 1990s? To address this question, we begin by taking the roughly 100 electricity companies in SIC codes 4911 and 4931 and creating a pool of forecast errors just for them. Given that we want to see how things have changed over the past decade, we now allow for 10 years' worth of forecast errors, from 1990 to 1999. In total, this yields about 3,400 forecast errors for this industry. Next, we divide our ten-year sample period into thirds, corresponding to the early, mid, and late 1990s, and examine


separate C-FaR distributions for each. Panel A of Table 4 shows the year-ahead five-percent tails for each of the three sub-periods. As can be seen, this value rises from 1.80 in the early 90s to 2.11 in the mid 90s to 3.30 in the late 90s; that is, it roughly doubles over the course of the decade.
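As a rough illustration of the mechanics (not the study's actual code), computing such sub-period tails comes down to pooling the scaled EBITDA forecast errors and taking the empirical fifth percentile within each sub-period. In the Python sketch below, the data layout, the sub-period boundaries, and the example observations are assumptions made purely for illustration.

# Sketch: year-ahead five-percent tails by sub-period, assuming `errors` holds
# (fiscal_year, ebitda_forecast_error_per_100_assets) pairs for one industry.
# The data layout and cut-off years are illustrative assumptions, not the study's.

def five_percent_tail(values):
    """Empirical 5th percentile of a pool of forecast errors."""
    ordered = sorted(values)
    idx = max(int(0.05 * len(ordered)) - 1, 0)
    return ordered[idx]

def tails_by_subperiod(errors):
    buckets = {"early 90s": [], "mid 90s": [], "late 90s": []}
    for year, err in errors:
        if year <= 1992:
            buckets["early 90s"].append(err)
        elif year <= 1996:
            buckets["mid 90s"].append(err)
        else:
            buckets["late 90s"].append(err)
    # The text quotes the magnitude of each sub-period's fifth-percentile error.
    return {label: five_percent_tail(vals) for label, vals in buckets.items() if vals}

# Tiny made-up example, just to show the call pattern:
example = [(1991, -1.5), (1991, 2.0), (1995, -2.3), (1999, -4.0), (1999, 1.2)]
print(tails_by_subperiod(example))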

Another question that one might ask is: how do the C-FaR distributions vary for electricity companies with different characteristics? Naturally, we no longer have enough data to chop this much smaller pool of forecast errors into 81 separate bins. But there is no longer any need to. First, the observations are all from firms in the same industry, so there is no need to do an industry cut. Moreover, most are relatively large (compared to the overall Compustat sample), so there is less need to sort on market cap as well. Instead, we streamline our sorting procedure so that we just do a pair of two-way sorts, one on profitability and one on stock-price volatility, thereby dividing the pool of forecast errors into four subsamples.

Panel B of Table 4 reports the results of this procedure implemented only on the late-1990s data. As can be seen, the general tendencies that we identified in the full Compustat sample (higher cash flow volatility among firms with low profitability and high stock-price volatility) hold true within the electricity sector as well. In particular, those firms that land in the low-profitability/high-stock-price-volatility bin have a year-ahead five-percent tail that is roughly twice as large as that of the firms in any of the other bins.

CONCLUSIONS
We believe that our top-down, comparables-based approach to estimating C-FaR offers a number of practical advantages. First and foremost, by looking directly at the ultimate item of interest, cash flow variability, the model naturally produces estimates that, within any given peer group, are correct on average. In contrast, with a bottom-up approach, one cannot be sure that the estimates are not severely biased. Second, the model is non-parametric, and thereby avoids imposing the highly unrealistic assumption


that shocks to cash flow are normally distributed. Finally, once the model is built, it can be applied easily and at relatively low cost to any number of non-financial companies. Of course, none of this is to claim that our approach dominates the alternative of building a company-specific C-FaR model from the bottom up. Rather, the two approaches can be thought of as complementary. For example, our model can be used to provide a reality check on the results produced by an in-depth bottom-up analysis. Again, the analogy to valuation practices is informative. Comparables methods are widely (though not exclusively) used by practitioners to value companies. And in spite of their inability to factor in certain types of company-specific information, it would be hard to argue that they do not represent an important part of the pragmatic person's valuation toolkit. We hope that our approach to estimating C-FaR will prove to be similarly useful.




LESSON 11: RISK REDUCTION THROUGH POOLING LOSSES


Chapter Objectives
Show how pooling of independent loss exposures reduces risk.
Show how correlation in losses affects the amount of risk that is reduced in a pooling arrangement.
Discuss how pooling arrangements provide the foundation for insurance transactions and how insurers are efficient managers of pooling arrangements.
Discuss other examples of diversification, including stock markets.

UNIT I CHAPTER 5 DIVERSIFICATION OF RISK


Note that the pooling arrangement changes the distribution of costs paid by each person from that shown in Table 4.1; this is because the costs paid by Emily now depend on the accident losses incurred by Samantha, and vice versa. Specifically, with pooling, the cost paid by each person is the average loss of the two people. The first column of Table 4.2 lists the possible outcomes for Emily and Samantha with pooling. If neither woman has an accident, total accident costs are zero and each woman pays zero. If either of the women has an accident, total accident costs are $2,500 and each woman pays $1,250. If both women have an accident, total accident costs equal $5,000 and each pays $2,500.

Now let's find the probabilities of each of these outcomes (the last column of Table 4.2). Since the losses incurred by Emily are independent of the losses incurred by Samantha, the probability that neither woman has an accident is simply the probability that Emily does not have an accident times the probability that Samantha does not have an accident. Thus, the probability of the first outcome is (0.8)(0.8) = 0.64. An analogy might help reinforce this result. Consider flipping a coin twice. The result of the second coin flip is independent of the result of the first coin flip. The probability of obtaining two heads is the probability of heads on the first coin flip times the probability of heads on the second coin flip, or (0.5)(0.5) = 0.25. You can convince yourself that this is true by noting that there are four possible outcomes: heads-heads, heads-tails, tails-heads, and tails-tails. Each of these outcomes has a 0.25 probability of occurring.

Table 4.1: Probability distribution of accident losses for each person without pooling
Outcome      Probability
$0           0.80
$2,500       0.20


Risk Reduction through Pooling Independent Losses
The most important risk management concept may be the diversification of risk. Diversification is an essential aspect of insurance and financial markets. We analyze diversification in this chapter, highlighting the factors influencing the extent to which risk can be and is diversified. We illustrate diversification in several different contexts, beginning with a simple pooling arrangement between two people and ending with diversification among thousands of people or businesses through insurance and financial markets.

Using the probability and statistics concepts reviewed in the previous chapter, we can now explain how pooling arrangements reduce risk when losses are independent (uncorrelated).

Two-Person Pooling Arrangement
Suppose that Emily and Samantha each are exposed to the possibility of an accident in the coming year. In particular, assume that each person has a 20 percent chance of an accident that will cause a loss of $2,500 and an 80 percent chance of no accident. The probability distribution for accident losses for each woman is summarized in Table 4.1. Note that the distribution is very skewed; that is, there is a high probability of zero loss and a much smaller probability of a large loss. Also assume that Emily's and Samantha's accident losses are uncorrelated.



Table 4.2: Probability distribution of accident costs paid by each woman with pooling

Possible outcomes                                   Total cost   Cost per person   Probability
1. Neither Samantha nor Emily has an accident       $0           $0                (0.8)(0.8) = 0.64
2. Samantha has an accident but Emily does not      $2,500       $1,250            (0.2)(0.8) = 0.16
3. Emily has an accident but Samantha does not      $2,500       $1,250            (0.8)(0.2) = 0.16
4. Both Samantha and Emily have an accident         $5,000       $2,500            (0.2)(0.2) = 0.04

We want to examine what will happen if Emily and Samantha agree to split evenly any accident costs that the two might incur. That is, they agree to share losses equally, each paying the average loss. This arrangement often is called a pooling arrangement (or risk pooling arrangement), because Emily and Samantha are pooling their resources to pay the accident costs that may occur.

Because Emily and Samantha each have a 20 percent chance of having an accident that causes $2,500 in losses, the expected cost and the standard deviation for each person without a pooling arrangement are as follows:

Expected cost = (0.80)($0) + (0.20)($2,500) = $500
Standard deviation = [0.8 × ($0 - $500)^2 + 0.2 × ($2,500 - $500)^2]^(1/2) = $1,000

Our goal is to determine how the pooling arrangement will affect the expected cost and standard deviation for each person.
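A quick check of these two figures in Python, using the Table 4.1 distribution:

# Expected value and standard deviation of one person's accident loss (Table 4.1).
probs  = [0.80, 0.20]
losses = [0.0, 2500.0]

expected = sum(p * x for p, x in zip(probs, losses))                      # $500
variance = sum(p * (x - expected) ** 2 for p, x in zip(probs, losses))
std_dev = variance ** 0.5                                                 # $1,000

print(expected, std_dev)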

Returning to the accident costs example, let's find the probability of the second and third outcomes shown in Table 4.2 (in which only one of the two women has an accident). The probability that Samantha has an accident but Emily does not equals (0.2)(0.8) = 0.16. The probability that Emily has an accident but Samantha does not is also 0.16. Thus, the probability that only one of the women has an accident equals 0.16 + 0.16 = 0.32. The probability of the fourth outcome (both Emily and Samantha have an accident) is (0.2)(0.2) = 0.04.


As can be seen clearly from this example, the pooling arrangement changes the probability distribution of accident costs facing each person. The probability that Emily will have accident costs equal to $2,500 is reduced from 0.20 to 0.04. This is because, in order for Emily to pay $2,500, both Emily and Samantha must experience an accident. Given that accidents are independent, the probability that both Emily and Samantha will have an accident is lower than the probability that only Emily (or only Samantha) will have an accident. Because the pooling arrangement reduces the probabilities of the extreme outcomes, the standard deviation (risk) of accident costs paid by both Emily and Samantha is reduced. Recall that, without pooling, the standard deviation of accident costs in this example is $1,000. With pooling, the standard deviation of accident costs declines to $707:

Standard deviation = [0.64 × ($0 - $500)^2 + 0.32 × ($1,250 - $500)^2 + 0.04 × ($2,500 - $500)^2]^(1/2) = $707
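These pooled figures can be verified by enumerating the joint outcomes for two independent participants, as in the following Python sketch:

# Each person pays the average loss of the two; losses are independent (Table 4.1 distribution).
from itertools import product

probs = {0.0: 0.80, 2500.0: 0.20}   # loss -> probability for one person

pooled = {}   # average cost per person -> probability
for (l1, p1), (l2, p2) in product(probs.items(), repeat=2):
    avg = (l1 + l2) / 2.0
    pooled[avg] = pooled.get(avg, 0.0) + p1 * p2

expected = sum(p * c for c, p in pooled.items())                          # still $500
std_dev = sum(p * (c - expected) ** 2 for c, p in pooled.items()) ** 0.5  # about $707

print(pooled, expected, round(std_dev))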

While risk (standard deviation) decreases, each individual's expected accident cost again remains constant at $500. The probability distribution of each person's accident cost will continue to change as more people are added. Figure 4.1 compares the probability distribution for average accident costs when there are 4 and 20 participants in the pooling arrangement. Note that as the number of participants in the pooling arrangement increases, the probability of the extreme outcomes (very high average losses and very low average losses) goes down. Stated differently, the probability that average losses (the amount paid by each participant) will be close to $500, the expected loss, increases. Also, as the number of participants increases, the probability distribution of each person's cost (the average loss) becomes more bell shaped, that is, less skewed. In summary, pooling makes the amount of accident losses that each person must pay less risky (more predictable), because pooling reduces the standard deviation of the average loss for all the participants and thus the standard deviation of the payment by each participant. The pooling arrangement therefore reduces risk for each participant. As even more participants are added, the probability distribution would become more and more bell shaped (less skewed).

Notice in all of these examples that each participant is not simply transferring risk to someone else. Instead, there is a reduction in risk for each individual. This is the beauty of risk pooling arrangements: risk can be reduced substantially for the participants. This point is extremely important and often is not fully appreciated by students. Pooling arrangements reduce the amount of risk that each participant has to bear.

To summarize, when losses are independent, pooling arrangements have two important effects on the probability distribution of the accident cost paid by each participant. First, the standard deviation of the average loss is reduced. As a consequence, the probability of extreme outcomes for participants, both high and low, is reduced. Second, the distribution of average losses becomes more bell shaped. In the extreme (i.e., as the number of people in the pooling arrangement becomes very large), the standard deviation of each participant's cost becomes very close to zero and the risk thus becomes negligible for each participant. This result reflects what is known as the law of large numbers. In addition, as the number of participants grows, the probability distribution of the average loss (each participant's cost) becomes more and more bell shaped until it eventually equals the normal distribution, the most famous distribution in all of statistics. This result reflects what is known as the central limit theorem.

Finally, our examples of risk reduction through pooling arrangements have assumed that all participants have the same probability distribution. This assumption is not essential. The standard deviation of average loss also tends to decline when more and more participants with different loss distributions are added to a pooling arrangement. Furthermore, risk in principle still becomes negligible as the number of participants becomes infinitely large.

While both Samantha's and Emily's risk is reduced by pooling, each person's expected accident cost is unchanged by pooling. It still equals $500:

Expected cost = (0.64)($0) + (0.32)($1,250) + (0.04)($2,500) = $500

In summary, the pooling arrangement does not change either person's expected cost, but it reduces the standard deviation of costs from $1,000 to $707. Accident costs have become more predictable. The pooling arrangement reduces risk (uncertainty) for each individual.


Pooling arrangements provide a major example of how risk is reduced through diversification. Simply stated, diversification means that you do not put all your eggs in one basket. By entering into a pooling arrangement, Emily and Samantha made their accident costs for the year equal the average loss for the participants. If they had not entered into the pooling arrangement, their accident costs would equal their own losses. The key point is that the average loss is much more predictable than each individual's loss. Applying the egg analogy to the pooling arrangement, (1) each woman puts half of her eggs into one basket and half into another basket, and (2) Emily carries one basket and Samantha carries the other. After reaching their destination, they divide the surviving eggs equally.

Pooling Arrangement with Many People or Businesses
Additional risk reduction can be obtained from pooling by adding people (or businesses) to the arrangement. To illustrate, suppose that Anne, who has the same probability distribution for accident costs as Samantha and Emily, joins the pooling arrangement. At the end of the year, each woman will pay one-third of the total losses (the average loss). The addition of a third person whose losses are independent of the other two causes an additional reduction in the probability of the extreme outcomes. For example, in order for Samantha to pay $2,500 in accident costs, all three individuals must experience a $2,500 loss. The probability of this occurring is (0.2)(0.2)(0.2) = 0.008. As a consequence, the standard deviation for each individual decreases with the addition of another participant.
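For independent, identically distributed losses, the standard deviation of the average loss for n participants is the individual standard deviation divided by the square root of n. The short sketch below applies this to the $1,000 figure from the example and shows how quickly the value falls as participants are added:

# Standard deviation of the average loss for n independent participants,
# each with individual standard deviation sigma ($1,000 in the example).
sigma = 1000.0

for n in (1, 2, 3, 4, 20, 100, 1000):
    sd_of_average = sigma / n ** 0.5
    print(n, round(sd_of_average, 2))
# 2 participants give about 707; as n grows the value approaches zero (law of large numbers).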


Concept Checks
1. Explain how a pooling arrangement reduces risk for each participant when losses are uncorrelated. Does pooling reduce the expected cost paid by each participant? Explain.
2. Suppose that each participant in a pooling arrangement has potential losses ranging from $0 to $4,000 and that each participant's expected loss is $1,000. Using Figure 4.2 as a guide, sketch the probability distribution of average losses if the losses across participants are independent and if:
a. There is one participant (i.e., no pooling)
b. There are 100 participants
c. There are 1,000 participants

To illustrate, consider the effect of introducing positive correlation between Emily's and Samantha's losses. Positive correlation does not change Emily's or Samantha's initial probability distribution for accident costs. We start the year knowing that the probability of an accident is 0.2 for both Emily and Samantha. Now suppose you hear later that Emily has had an accident, but you do not know whether Samantha has had an accident. What is your assessment of the probability that Samantha will have an accident? If the accidents are assumed to be independent, then your assessment will not change; the probability of Samantha having an accident still will be 0.2. However, if the accidents are assumed to be positively correlated, then knowing Emily has had an accident will raise your assessment of Samantha's accident probability above 0.2. Positive correlation between Emily's and Samantha's accident costs implies that the probability of both women having an accident is greater than 0.04. Similarly, positive correlation implies that the probability of neither woman having an accident is greater than 0.64. Unless we make more assumptions, we cannot specify the exact probabilities of the various outcomes. The critical point, however, is that positive correlation between Emily's and Samantha's accident costs implies that the probability of the extreme outcomes (i.e., that either both or neither will have an accident) is higher than if accident costs were independent.

The maximum degree of positive correlation is perfect positive correlation. In this case, if Emily has an accident, so will Samantha, and if Emily does not have an accident, neither will Samantha. Perfect positive correlation implies that whatever happens to Emily also happens to Samantha. As a result, the probability of both women having an accident is the same as the probability that either one of them will have an accident (0.2), and the probability that neither woman will have an accident is the same as the probability that one of them will not have an accident (0.8).

The effect of positively correlated losses on the distribution of average losses is summarized in Figure 4.3. Two cases are presented. In both cases, there are 1,000 participants in the pooling arrangement and each participant has an expected loss of $500. In one case, the losses of each participant are uncorrelated; in the other case, they are positively correlated. As illustrated, when losses are positively correlated, the distribution of average losses has a higher standard deviation, so that average losses are less predictable.

Figure 4.4 further illustrates the effect of correlated losses on pooling arrangements by examining how the standard deviation of average losses changes as the number of participants increases. The vertical axis measures the standard deviation of average losses. The horizontal axis measures the number of participants in the sharing arrangement. When losses are uncorrelated, the standard deviation approaches zero as the number of participants gets large (recall the law of large numbers). When losses are perfectly positively correlated, the standard deviation of average losses does not change as the number of participants increases. Intuitively, when losses are perfectly positively correlated, there can be no risk reduction from pooling, because whatever happens to one participant happens to all other participants.
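To make the effect of correlation concrete, recall the standard result that the standard deviation of the average of n identically distributed losses with common pairwise correlation rho is sigma * sqrt(1/n + rho * (n - 1)/n). The sketch below applies this formula to the $1,000 standard deviation from this chapter's example; the correlation values chosen are illustrative assumptions.

# Standard deviation of the average of n identically distributed losses with
# pairwise correlation rho (sigma = $1,000 as in the example).
sigma = 1000.0

def sd_of_average(n, rho):
    return sigma * (1.0 / n + rho * (n - 1) / n) ** 0.5

for rho in (0.0, 0.1, 1.0):
    print(rho, [round(sd_of_average(n, rho)) for n in (1, 10, 100, 1000)])
# rho = 0   -> 1000, 316, 100, 32    (approaches zero)
# rho = 0.1 -> 1000, 436, 330, 318   (levels off near sigma * sqrt(rho), about 316)
# rho = 1   -> 1000, 1000, 1000, 1000 (no risk reduction from pooling)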


Losses across many different businesses or individuals may be positively correlated for a number of reasons. The occurrence of a loss is often due to events that are common to many people. Catastrophes, such as hurricanes and earthquakes, are examples of events that cause property losses to increase for many individuals at the same time. Consequently, losses in certain geographical regions during a given time period are positively correlated. Similarly, since epidemics can cause medical costs to increase for many people during a given time period, the medical costs across people can be positively correlated. The severity or magnitude of losses also is often influenced by common factors. For example, unexpected inflation can cause everyone who needs health care to pay more than expected. The probability of receiving medical care may be independent across people (in contrast to the epidemic example), but the magnitude of the medical costs incurred by different people is related to a common underlying factor: inflation.


How do positively correlated losses affect pooling arrangements? Intuitively, positively correlated losses imply that when one person (or business) has a loss that is greater than the expected loss, then other people (or businesses) also will tend to have losses that are above the expected loss. Similarly, when one person has a loss that is less than the expected loss (e.g., no loss), then other people also will tend to have losses below the expected value. Thus, when losses are positively correlated, there is a greater chance that lots of people will have high losses and a greater chance that lots of people will have low losses, relative to the case of uncorrelated losses. Consequently, pooling arrangements do not decrease the standard deviation of average losses as much when losses are positively correlated. Stated differently, average losses are more difficult to predict when losses are positively correlated.

To reinforce this idea, with uncorrelated losses, there is a relatively high probability that unexpectedly high losses experienced by one person will be offset by the unexpectedly low losses of other participants. Thus, the average loss becomes very predictable. When losses are positively correlated, more participants incur similar losses, and one person's unexpectedly high losses are less likely to be offset by another person's unexpectedly low losses.


Since in many instances losses will be positively correlated, we need to examine risk reduction through pooling in this case. We will demonstrate that the essential point (that pooling arrangements reduce risk for each participant) continues to hold provided losses are not perfectly positively correlated. However, the magnitude of risk reduction is lower when losses are positively correlated than when they are independent (uncorrelated).


Pooling Arrangement with Correlated Losses


Figure 4.4 also illustrates the intermediate case, where losses are characterized by less than perfect positive correlation. As can be seen, the standard deviation of average losses decreases as the number of participants increases, but the standard deviation does not approach zero. The amount of risk (standard deviation) cannot be reduced as much by adding participants when losses are positively correlated; the greater the degree of correlation, the less is the reduction in risk.

Concept Check
3. Sketch the probability distribution for average losses in a pooling arrangement in each of the following cases. The appearance of any one distribution is not important; instead, the relative appearance of the distributions matters.
a. The expected loss for each participant is $500, and losses for the 100 participants are independent.
b. The expected loss for each participant is $500, and losses for the 100 participants are positively correlated.


c. The expected loss for each participant is $1,500, and losses for the 100 participants are independent.
d. The expected loss for each participant is $1,500, and losses for the 100 participants are positively correlated.

Insurers as Managers of Risk Pooling Arrangements
As we just learned, individuals or businesses can reduce their risk by forming a pooling arrangement. As a result, risk-averse individuals and businesses that value lower risk would have strong incentives to participate in pooling arrangements if they could be organized at zero cost. However, risk-pooling arrangements obviously are not costless to operate. Indeed, the cost of organizing and operating pooling arrangements is the main reason why insurance companies exist and why most pooling arrangements take place indirectly through insurance contracts. In essence, insurance contracts are a way of lowering the costs of operating pooling arrangements.

Types of Contracting Costs

Consider a risk pooling arrangement like the one introduced earlier, in which Emily and Samantha agree to share losses equally. We showed how this type of pooling arrangement reduces each participant's risk, as measured by the standard deviation of his or her payment, provided that losses are not perfectly positively correlated. The greater the number of people who participate in a pooling arrangement, the greater is the reduction in risk. There are, however, several important costs associated with writing and enforcing contracts among participants, which in general are referred to as contracting costs. To illustrate how insurance companies economize on these costs, this section describes the major types of contracting costs associated with pooling arrangements.


Consider first the costs associated with adding participants to risk pools. In practice, risk-pooling arrangements incur substantial costs in marketing and in specifying the terms of agreement. These costs often are called distribution costs. As discussed in Box 4.2, insurers employ a variety of distribution systems, including exclusive agents and independent agents and brokers. Once a potential participant in a pooling arrangement has been identified, it must be decided whether to allow the individual to participate. For example, suppose that existing participants in a pooling arrangement all have expected losses of $200. Then those participants will be reluctant to allow a person with an expected loss of $400 to join on the same terms. Thus, the pool members will want to evaluate each potential participant's expected loss. The process of identifying (estimating) a potential participant's expected loss is known as underwriting, and the costs of doing so are called underwriting expenses.

When a participant in a pooling arrangement experiences a loss, the person must inform and seek payment from the other members. To prevent people from fraudulently claiming that a loss has occurred or exaggerating loss amounts, the pooling arrangement must monitor claims. The costs associated with this process usually are called loss adjustment expenses (or claims settlement expenses). Pooling arrangements also involve collection costs. If, for example, a particular member has a valid claim of $10,000, each participant will ultimately have to be assessed the specified share of the $10,000 loss (e.g., $10 each if there are 1,000 members that agree to share losses equally). Alternatively, each member will have to be billed periodically for his or her share of total claim costs since the last payment. In either case, the collection of funds will involve costs in sending a bill to each member and attempting to ensure that each member pays his or her assessment.

You can think of insurance companies as organizations that have emerged to reduce the costs of operating pooling arrangements. For example, without a central organization to recruit new members and distribute contracts (marketing and distribution), screen applicants (underwriting), monitor claims (loss adjustment), and collect assessments, each member of a pooling arrangement would need to contract with each of the other members. With 1,000 members, 499,500 separate contracts would be needed (1,000 members × 999 contracts per member ÷ 2, since only one contract per pair is needed). With a central organization, only 1,000 contracts between the organization and the members are needed. In addition, without a central organization, each member would have to (1) become involved in underwriting each of the other members, (2) investigate each claim, and (3) individually collect assessments. These activities involve expertise that most people do not have and require considerable amounts of time. The existence of insurance companies that specialize in these activities typically is efficient (i.e., it lowers costs).

Ex Ante Premium Payments versus Ex Post Assessments
In contrast to pure pooling arrangements, insurance companies usually do not have the legal right to assess members of the pooling arrangement (policyholders) for losses that have occurred. Instead, policyholders pay an ex ante premium, that is, prior to knowing the magnitude of losses, without giving the insurer the right of assessment if more money is ultimately needed to pay claims (ex post). One explanation for having fixed ex ante premiums as opposed to ex post assessments is that collecting assessments from people who do not have losses is costly. Some people will attempt to delay and in some cases avoid paying assessments. Moreover, with a pure assessment system, funds might not be available to pay losses quickly. The resulting delay in claim payments would be costly to those participants that have experienced losses. Finally, assessments impose risk on participants: They do not


know in advance how much they will have to contribute (although the risk still is lower than if they were not members). For these reasons, insurers commonly charge policyholders a fixed, advance premium without having the right to assess policyholders for losses during the coverage period if realized losses for the insured group turn out to be higher than expected. Fixed ex ante premiums imply that the insurer obtains revenue (premium payments) prior to paying claims. As we elaborate in the next chapter, insurers typically invest these funds in a variety of financial assets. The resulting investment earnings can be used to help pay claims when they come due. The insurer's expected investment earnings reduce the premium that needs to be charged to cover the insurer's expected costs, all else being equal.

Other Examples of Diversification: Stock Markets
There are many ways that people and businesses diversify risk in addition to pooling arrangements through insurance contracts. Stock markets, for example, provide a mechanism for entrepreneurs to share the risk associated with new business ventures with other people. A share of stock entitles the owner of the share to a portion of a company's dividends. If the company does well, then dividends and/or stock prices will increase, and the owner will gain accordingly. If the company does poorly, then dividends and/or stock prices will decline, and the owner of the share will lose accordingly. Thus the owner of the share of stock shares in the risk of the venture.


Most investors (including entrepreneurs) do not invest all of their wealth in one company's stock; instead, they invest small amounts of their wealth in many different stocks. This can be accomplished at low cost using a mutual fund. In this way, their wealth at the end of their investment horizon does not totally depend on the fortunes of just one company. Investing in a number of different stocks is an example of portfolio diversification, and it is recommended by almost all financial advisors. Portfolio diversification reduces the investor's risk without necessarily sacrificing expected return.

The main message from the discussion of pooling arrangements with correlated losses is that positive correlation limits the amount of risk that can be eliminated through pooling arrangements. An analogous result holds for stock portfolio (investment) diversification. Returns on different stocks are positively correlated, because all firms are affected to some degree by common factors, such as general economic conditions and interest rates. Consequently, some of the risk associated with holding stocks cannot be diversified away. That is, the positive correlation in stock returns limits the amount of risk that can be eliminated through portfolio diversification. The risk that cannot be eliminated usually is called systematic risk, or sometimes non-diversifiable or market risk.



LESSON 12: RISK REDUCTION DECISIONS


Chapter Objectives
Identify firm characteristics that influence firm decisions about risk retention/reduction.
Summarize evidence indicating which types of firms are more likely to reduce risk.
Identify the variables on which a firm should focus its risk reduction activities.
Explain the advantages and disadvantages of following a disaggregated approach to risk reduction.

UNIT I CHAPTER 6 RISK DECISIONS



Firm Characteristics Affecting Risk Retention (Reduction) Decisions
The previous two chapters outlined conceptual reasons why firms might find it advantageous to reduce risk even when the firm's owners can reduce risk on their own through portfolio diversification. In short, firm-level risk affects the likelihood that a firm not only will have to raise costly external capital but also will encounter financial distress, which in turn affects the terms at which a firm contracts with lenders, employees, suppliers, and customers. We explained that firms might reduce risk because risk reduction is required by regulation or reduces expected tax payments. In this section, we use the conceptual arguments from the previous two chapters to derive implications about specific firm characteristics that are likely to influence risk reduction decisions.

Benefits of Increased Retention

Risk retention refers to the decision to accept the uncertainty (variability) associated with a particular risk exposure. Conversely, risk reduction refers to the decision to reduce uncertainty (variability). Our discussion of the retention decision assumes that the alternative to retention is to reduce risk using an insurance contract. However, the points generalize to other risk reduction methods that are discussed in subsequent chapters, such as risk reduction using derivative contracts. Potential savings to a firm from increasing retention include: (1) savings on premium loadings, (2) reducing exposure to insurance market volatility, (3) reducing moral hazard, (4) avoiding high premiums that may accompany asymmetric information, and (5) avoiding implicit taxes that arise from insurance price regulation.

Savings on Premium Loadings
A key factor motivating additional retention is the ability to save on some of the administrative expense and profit loadings in insurance premiums, thus reducing the expected cash outflows for these loadings. Specific sources of savings include lower commissions to insurance brokers, possible savings in underwriting expenses and administrative costs of claim settlement, and savings in state premium taxes (typically 2 percent of the premium) and implicit taxes for expected guaranty fund assessments. Recall, however, that part of an insurer's administrative costs are due to the provision of services to the insured. Thus, the savings on premium loadings depend on the insurer's cost of providing these services relative to the firm's own costs. The savings on premium loadings also depend on the amount of profit loading that the firm can avoid paying by retaining more risk, which in turn depends on the insurer's capital costs and ability to reduce risk through diversification and reinsurance, relative to the firm's capital costs and ability to diversify risk.

Potential savings in profit loadings also can depend on the degree of competition in insurance markets. While most insurance markets are competitively structured, the market for very large limits of business insurance often involves negotiation between the corporate buyer and a group of insurers that share the risk. In these instances, it has been suggested that insurers may achieve higher expected profits than is the case where many independent insurers are competing to sell coverage.

Reducing Exposure to Insurance Market Volatility
Another motivation for some corporations to increase risk retention has been the desire to reduce their vulnerability to annual swings in insurance prices due to the effects of shocks to insurer capital on the supply of insurance and/or the insurance underwriting cycle. Loss financing decisions often are part of a long-term business strategy or plan. Once a firm decides to insure a particular exposure, it may be costly to change its strategy in response to an insurance price increase. This is because an immediate large increase in the amount of risk retained can increase the probability of financial distress, increase the likelihood that the firm will not have sufficient internal funds to adopt positive net present value projects, and damage relationships with customers, suppliers, or lenders. Arranging alternative loss financing, such as accumulating internal funds or establishing a captive, also can take time. As a result of these influences, the demand for insurance by individual firms often is inelastic in the short run (i.e., comparatively unresponsive to a change in price in the short run). As a consequence, the purchase of insurance can lead to a perverse result: even though a major purpose of purchasing insurance generally is to reduce uncertainty in cash flows, the volatility in insurance prices exposes the firm to uncertainty. When making long-term loss financing decisions, therefore, risk managers often view the volatility in insurance prices as a negative aspect of insurance, which leads them to increase retention.

Reducing Moral Hazard
You learned in Chapter 10 that deductibles and other copayments reduce moral hazard. Without these contractual provisions, expected claim costs would be higher and therefore so would insurance premiums. Consequently, when moral hazard is more of a potential problem, firms tend to retain more risk.




Avoiding High Premiums Caused by Asymmetric Information
The inability of insurers to estimate claim costs precisely for all potential buyers causes some buyers to face prices that are relatively high compared to their true, unobservable expected claim costs. These buyers have an incentive to retain more risk. Higher risk buyers would have the opposite incentive (i.e., they would retain less risk to the extent that they face a lower price for insurance because they are pooled with lower risk firms). Note, however, that the reasoning "We have lower expected claim costs than what the insurer thinks" might be seductive and somewhat dangerous. Recall that insurers have substantial incentives to forecast costs accurately. Firms also can provide insurers with any available evidence that their expected claim costs might be lower than predicted by the insurer.

Avoiding Implicit Taxes Due to Insurance Price Regulation
In the case of workers' compensation insurance, some states periodically have had large residual markets characterized by significant cross-subsidies from the voluntary market to the residual market. To the extent that this occurs in workers' compensation or other lines of business insurance that have residual markets (e.g., commercial auto liability and some other types of liability coverage), any higher premiums needed to subsidize the residual market increase the incentives for firms that would be insured in the voluntary market to self-insure or otherwise increase their retention. Firms that can obtain subsidized coverage in the residual market will tend to purchase more coverage (retain less risk).

Maintaining Use of Funds
It often is argued that another advantage of retention is that the firm gets to maintain use of the funds that otherwise would be paid in premiums until claim costs are paid. Given that competitive insurance premiums will reflect the present value of expected claim costs, it is not obvious that this argument is valid. The reason is that discounting expected claim costs to present value implicitly provides insurance buyers with a return on funds paid in premiums until claims are paid. As explained earlier, income tax rules for insurance versus self-insurance might even allow insurers to provide greater implicit after-tax returns to insurance buyers than could be obtained if buyers held the same amount of funds in similar assets to finance retained losses.

It sometimes is argued that a firm should view its opportunity cost of paying premiums as equal to its opportunity cost of capital for general investment decisions, which will exceed the risk-free rate of interest due to the presence of non-diversifiable risk, whereas insurers will discount expected claim costs at the risk-free rate (or something close to the risk-free rate). However, this argument is problematic because theory generally suggests that the rate used to discount losses should depend on the risk of the losses rather than on whether the firm or the insurer pays the losses. As a result, the appropriate discount rate for losses is the same for the firm and the insurer (apart from any tax considerations). At a minimum, it is important for you to recognize that premiums in competitive insurance markets will provide some implicit return for the expected average time lag between the payment of premiums and claim costs.
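The discounting point can be illustrated with a small, purely hypothetical calculation; the claim amount and discount rate below are made-up numbers, not figures from the text.

# Hypothetical illustration: a competitive premium that reflects the present value of
# expected claim costs gives the buyer an implicit return on the prepaid funds.
expected_claims = 1_000_000.0   # expected claim cost, assumed paid one year from now
discount_rate = 0.05            # assumed rate used to discount expected claims

premium = expected_claims / (1 + discount_rate)   # about 952,381 paid today

# Holding the premium in an asset earning the same 5% replicates the expected claims,
# so prepaying the premium does not by itself sacrifice investment income.
value_at_claim_date = premium * (1 + discount_rate)
print(round(premium), round(value_at_claim_date))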

Costs of Increased Retention
Increased retention obviously exposes the firm to greater risk. As you learned, increased risk can be costly for a number of reasons. For example, the greater risk from increased retention increases the probability of costly financial distress, with associated adverse effects on lenders, employees, suppliers, and customers, which causes them to contract with the firm at less favorable terms. Increased retention also may require the firm to raise costly external funds and forgo some profitable investment opportunities. Moreover, increased retention may reduce expected tax shields and sacrifice possible advantages to insurance from bundling responsibility for claims payment with claims settlement. Other things being equal, the costs associated with increased retention will vary across firms depending on the nature of their ownership and operations.

Closely Held versus Publicly Traded Firms with Widely Held Stock
The owners of closely held firms typically have a significant proportion of their wealth invested in the firm and thus are undiversified compared to shareholders of publicly traded firms with widely held stock. Because the owners of closely held firms are not diversified, they have an incentive to retain less risk (purchase more insurance) than publicly traded firms with widely held stock. Similarly, firms that have managers who own a large amount of stock and therefore are undiversified are more likely to reduce risk.

Firm Size and Correlation among Losses
If a firm has a large number of independent exposures, then the law of large numbers operates at the firm level, allowing the firm to predict its average loss per exposure more accurately. Consequently, one major benefit of insurance (the reduction in the variability of the average loss per exposure) can also be achieved by firms with a large number of uncorrelated loss exposures. Positive correlation among losses within a firm reduces the extent to which firms can diversify risk internally. Consequently, other things being equal, positive correlation increases the demand for insurance (provided that insurers are able to achieve superior diversification). Larger firms, with their generally larger cash flows, also are better able to finance losses of any given size out of cash flow than are smaller firms, and they often are able to raise external funds at lower cost. Each of these influences reduces the demand for insurance by large firms.

Investment Opportunities
Firms that are likely to have good investment opportunities will need funds to finance those investment opportunities. These firms will be more likely to reduce risk because an unexpected drop in cash flow can force the firm to either forgo the investment project or raise costly external capital in order to undertake the investment project. Firms that operate in growth industries and firms that require continual investment in research and development are likely to benefit from risk reduction, all else equal.








A related result is that a positive (negative) correlation between losses and the rate of return on new investment will reduce (increase) the ability of the firm to pursue profitable investments without raising external funds, thus increasing (decreasing) the demand for insurance. The reason is that the demand for funds for new investment will tend to be high when losses are high and available internal funds are low. This case often is more applicable to hedging than insurance. For example, a reduction in oil prices is likely to reduce the rate of return on new investment in the exploration for oil. Firms in the oil industry will desire to invest less money in exploration following an oil price decline, and they will therefore have less incentive to hedge the risk of lower oil prices.

Financial Leverage
Firms with higher financial leverage (ratio of debt to equity) will have a higher likelihood of financial distress, holding the probability distribution of future asset values constant. Consequently, firms with higher leverage are likely to find risk reduction more advantageous (and vice versa).

Product Characteristics
When consumers expect future services from the provider of products and services, the demand for those products and services will depend on consumers' perceptions about the likelihood that the provider will be able to provide the future services. Of course, the likelihood that a firm will be able to provide future services is inversely related to the likelihood of bankruptcy. Consumer durables, such as electronic equipment and cars, and financial services, such as insurance, are examples of products and services for which consumer demand is likely to be especially vulnerable to consumers' perceptions about the provider's probability of bankruptcy. Thus, firms in industries such as these tend to benefit more from risk reduction than firms in industries that produce products for which future services are not expected.

Concept Check
1. Other factors held constant, which type of firm would be more likely to fully retain (self-insure) its workers' compensation losses?
a. A firm with an individual shareholder who owns 50 percent of the stock versus a firm in which no shareholder owns more than 1 percent of the stock.
b. A trucking firm with 5,000 drivers versus a manufacturing firm with 5,000 workers at a single plant.
c. A firm with operating profits positively correlated with claim costs versus a firm with operating profits uncorrelated with claim costs.
d. A firm with a large amount of debt in its capital structure versus a firm with no debt.


Correlation of Losses with Other Cash Flows and with Investment Opportunities
Firms whose losses are positively correlated with other cash inflows will have a lower standard deviation of total cash flows, other things being equal, and thus will tend to retain more risk. In these cases, firms have a natural hedge: when losses tend to be high, other cash flows also tend to be high, thus reducing the likelihood of financial distress and the need for external funds. For example, if a firm has more workplace injuries when demand for its products is unexpectedly high, the increased profits due to the increase in demand will at least partially offset the increase in worker injury costs.
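The natural hedge can be expressed through the variance of net cash flow (other cash inflows minus losses): Var(net) = Var(inflow) + Var(loss) - 2 * rho * SD(inflow) * SD(loss), so a more positive correlation lowers the variability of the net. The standard deviations used in the sketch below are illustrative assumptions only.

# Illustrative (assumed) figures: how correlation between losses and other cash
# inflows affects the standard deviation of net cash flow (inflows minus losses).
sd_inflow = 10.0   # $ millions, assumed
sd_loss = 4.0      # $ millions, assumed

def sd_net_cash_flow(rho):
    variance = sd_inflow ** 2 + sd_loss ** 2 - 2 * rho * sd_inflow * sd_loss
    return variance ** 0.5

for rho in (-0.5, 0.0, 0.5):
    print(rho, round(sd_net_cash_flow(rho), 2))
# A higher (more positive) correlation gives a lower standard deviation of net cash flow,
# which is the "natural hedge" described above.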

A Basic Guideline for Optimal Retention
The previous section highlights the basic trade-off between the benefits of increased retention, through savings on explicit and implicit loadings in insurance premiums, and the costs of increased uncertainty. A basic guideline for optimal retention decisions in view of this trade-off is: retain reasonably predictable losses and insure potentially large, disruptive losses. As noted above, potentially large losses that can cause financial distress and interrupt planned investment can arise from a single event, or they can arise from a series of smaller events during a given period. For example, a company that transports chemicals may face the possibility of very large liability claims from a single accident (e.g., several hundred million dollars). It also may face large aggregate claims in a given year if it has an unexpectedly large number of smaller claims (e.g., 50 claims averaging $3 million each). These two possibilities help explain the demand for per occurrence deductibles (or self-insured retentions) and stop loss provisions.

For individual firms, application of the guideline that firms should retain predictable losses but insure potentially large, unpredictable, and disruptive losses depends on the specific magnitude of the benefits and costs of increased retention, including managerial judgment about the magnitude of losses that can be tolerated without producing significant costs. For example, the point or points at which losses cease to be reasonably predictable and become potentially disruptive depend on many factors, including firm size, the cost of raising external funds, and the expected value and variability of cash flows apart from any losses. Due to special circumstances (e.g., compulsory insurance rules), retention strategies adopted by particular firms may vary substantially from this basic guideline. You also should recognize that while the underlying motives for buying insurance differ, this guideline also is applicable to risk management decisions by individuals and closely held businesses. For example, auto owners routinely choose per occurrence deductibles for automobile collision coverage by considering the trade-off between increased risk and lower premiums for policies with larger deductibles. Moreover, risk management decisions by small, closely held businesses often reflect this trade-off.

Evidence on Business Risk Reduction Decisions
A number of studies have examined whether various firms' decisions regarding risk reduction correspond to the factors that have been outlined above. This type of research is difficult because most firms do not disclose details of their risk reduction decisions. For example, relatively few firms disclose the types and amounts of insurance they purchase. An interesting exception comes from the insurance industry. For regulatory reasons, US insurers disclose information about their use of reinsurance. One study examined reinsurance purchases by insurers and found that insurers with owners that were not well diversified purchase more reinsurance. It also found that smaller insurers, which tend to have greater financial distress costs and greater costs of raising external capital, purchase more reinsurance. Thus, evidence on reinsurance is consistent with several of the reasons given in the previous chapters for why firms should reduce risk.

Although firms rarely disclose specific information about their insurance purchases, they are required to disclose specific information about their use of derivative contracts, which are generally used to hedge price risk. Thus, a number of studies have examined whether the use of derivative contracts corresponds to the factors outlined above. These studies generally find that larger firms are more likely to use derivatives. The most likely explanation relates to the relatively large investment in computers and knowledgeable personnel that is necessary to have a derivatives trading operation. Smaller firms are likely to find that the fixed costs of setting up an internal hedging operation exceed the benefits of reducing price risk. Some studies have found that firms with relatively greater research and development expenses are more likely to use derivatives. This finding is consistent with one of the reasons for reducing risk (hedging) discussed earlier. Firms that make large investments in research and development need funds on a consistent basis. If internal funds are not available, then these firms will have to either raise costly external capital or forgo some research and development expenditures. To ensure that internal funds are available, firms with greater research and development are more likely to hedge.

Other research provides interesting findings about the hedging practices of gold mining companies operating in the United States and Canada. The primary risk faced by gold mining companies is the price of gold. When the price of gold increases, gold mining companies can sell their output for higher prices and thus make greater profits. Conversely, a drop in gold prices can reduce cash flows substantially and even threaten the viability of a company. This gold price risk can be hedged using derivative contracts. There is wide variation in the degree to which gold mining companies actually hedge gold price risk. Some companies hedge a large proportion of their risk and others do not hedge at all. Interestingly, gold mining companies are more likely to hedge gold price risk as the managers' stock ownership of the company increases. One interpretation is that managers with large undiversified ownership interests are more likely to hedge than managers with more diversified portfolios.

al




Liability Loss = $50 million   with probability 0.02
               = $25 million   with probability 0.04
               = $0            with probability 0.94


There is also evidence that firms are more likely to hedge as their financial leverage increases. One study examined the hedging practices of oil and gas producers. The output of these firms is subject to oil and gas price risk. If the price of gas decreases, then, all else equal, revenues decrease. Fortunately, this risk can be hedged with derivative contracts. Among oil and gas producers that use derivatives, the extent of hedging (the proportion of expected output) increases as the firm's financial leverage ratio increases.

Aggregated or Disaggregated Risk Management?
Assuming risk reduction is appropriate, firms must decide where to focus their risk reduction activities. Should firms take a disaggregated or micro approach and hedge (insure) each individual risk exposure separately? Or should firms hedge (insure) some aggregate or macro measure of performance,


For simplicity, assume that property losses have the same distribution:

Property Loss = $50 million   with probability 0.02
              = $25 million   with probability 0.04
              = $0            with probability 0.94

would pay $5 million of the liability loss and the insured would pay $20 million. Recall that the firm was willing to retain losses up to $40 million. The important point to notice is that in some cases the coverage provided by the separate contracts results in a payout from the insurer even though retained losses are less than $40 million. In these cases, the firm has purchased coverage that, ex post, it did not really need. We refer to this extra coverage as unnecessary coverage, and report it in the final column. The problem with purchasing unnecessary coverage under these assumptions is that there is a positive loading associated with purchasing coverage. Thus, the unnecessary coverage is costly for the firm's owners.

Now suppose that the firm was able to purchase an insurance policy that would indemnify the firm based on total losses. To achieve its objective of not having retained losses exceed $40 million, the firm could use one policy under which the insurer would pay aggregate (sum of property and liability) losses in excess of $40 million. We refer to a policy like this that bundles multiple exposures as a bundled policy. The outcomes with the bundled policy are summarized in Panel B of the above table. The important point is that with the bundled policy, there is no unnecessary coverage. As indicated in the final row of Panel B, the expected claim cost for the bundled policy is $472,000, which implies a loading cost equal to $94,400 (0.2 x $472,000). This policy achieves the firm's objective of not having retained losses above $40 million, but at a lower loading cost ($94,400 versus $320,000) compared to purchasing separate policies.

The advantage of bundling can be illustrated using the figure below. The horizontal axis indicates the property loss and the vertical axis indicates the liability loss. Using the same assumptions as the numerical example, suppose that the firm can retain losses up to $40 million, but would like coverage for aggregate losses in excess of $40 million. One way to achieve this objective is to purchase separate property and liability insurance policies, with each policy having a $20 million self-insured retention. The property policy would pay losses whenever property losses exceed $20 million, which is illustrated in the figure as the shaded area to the right of the vertical line labeled P. The liability policy would pay losses whenever liability losses exceed $20 million, which is illustrated in the figure as the shaded area above the horizontal line labeled L.

If the firm hedges each exposure separately, it can achieve its objective (total retained losses less than $40 million) by retaining $20 million of each exposure. In other words, the firm could purchase a liability insurance policy under which it would be reimbursed for liability losses in excess of $20 million and a property insurance policy under which it would be reimbursed for property losses in excess of $20 million. The expected claim cost on each policy equals ($30 million x 0.02) + ($5 million x 0.04) = $600,000 + $200,000 = $800,000.

With a 20 percent loading, the premium on each policy would equal $800,000 in expected claim costs plus $160,000 ($800,000 x 0.2) in loading. Because two policies are purchased, the total loading paid by the firm would equal $320,000.
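The arithmetic behind these figures can be checked with a short script. The sketch below (Python, written only for this comparison) uses the loss distributions, retentions and 20 percent loading assumed in the example and reproduces the expected claim costs and loadings for the two separate policies and for the bundled policy.

# Separate versus bundled coverage, using the example's assumptions:
# liability and property losses are independent, each $50m / $25m / $0
# with probabilities 0.02 / 0.04 / 0.94; separate policies each carry a
# $20m retention, the bundled policy a $40m aggregate retention; the
# loading is 20% of expected claim costs.

from itertools import product

outcomes = [(50e6, 0.02), (25e6, 0.04), (0.0, 0.94)]   # (loss, probability)
LOADING = 0.20

# Separate policies: each pays losses above a $20m per-line retention.
separate_expected_claims = sum(
    prob * max(loss - 20e6, 0.0) for loss, prob in outcomes
)
separate_loading = 2 * LOADING * separate_expected_claims   # two policies

# Bundled policy: pays aggregate (liability + property) losses above $40m.
bundled_expected_claims = sum(
    p1 * p2 * max(l1 + l2 - 40e6, 0.0)
    for (l1, p1), (l2, p2) in product(outcomes, outcomes)
)
bundled_loading = LOADING * bundled_expected_claims

print(f"Expected claims per separate policy:  ${separate_expected_claims:,.0f}")  # $800,000
print(f"Total loading, two separate policies: ${separate_loading:,.0f}")          # $320,000
print(f"Expected claims, bundled policy:      ${bundled_expected_claims:,.0f}")   # $472,000
print(f"Loading, bundled policy:              ${bundled_loading:,.0f}")           # $94,400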


Panel A of the above table summarizes the results of purchasing separate policies on each exposure. The first four columns list all the possible outcomes and the associated probabilities. For example, row two indicates that one possible outcome is that the liability loss equals $25 million and the property loss equals zero; this outcome occurs with probability 0.0376 (0.04 x 0.94). The later columns indicate the coverage provided by the separate contracts. For example, row two indicates that the insurer

To capture the idea that firms often want to avoid large losses, assume that the managers do not want total retained losses to exceed some critical value, say $40 million (perhaps because the firm would then be forced to raise costly external capital or violate a debt covenant). The firm can insure each loss exposure to achieve its objective, but assume that contracts are priced so that the firm must pay 120 percent of the contract's expected payout, implying a 20 percent loading or transaction cost. As you will see below, this proportional transaction cost can make the cost of managing each exposure separately greater than the cost of managing the bundled exposure.


A bundled policy that would achieve the firm's objective would pay losses whenever the sum of property and liability losses exceeds $40 million, which is illustrated in Figure 22.1B as the area above the line labeled B. The important point to notice is that the losses paid by the bundled policy (the shaded area for the bundled policy) are a subset of the losses paid by the separate policies (the shaded areas for the separate policies). The difference in the shaded areas is the sum of the triangles labeled Un. Cov. (for unnecessary coverage) in the figure. Because of proportional loading costs, the purchase of unnecessary insurance coverage is costly to the firm's owners.

Moral Hazard
A completely bundled policy would have only an aggregate retention level and an aggregate limit; consequently, the source of a loss would not matter for the contract's payoff. The problem with such a policy is that once a firm's aggregate retention level was reached, any additional loss (up to the aggregate limit) would be covered. Such a policy therefore would greatly reduce the insured's incentive to reduce additional losses once the retention level was reached. To mitigate this moral hazard problem, per occurrence deductibles for each type of loss exposure would likely be included in any bundled policy.

Costs Associated with a More Complex Contract
A disadvantage of bundling multiple exposures into one contract is that the parties need to have an understanding of all of the risk exposures and their correlations. The cost associated with performing this analysis can increase the transaction costs relative to those that would be incurred on separate contracts for each type of exposure. In the example above, we assumed a 20 percent loading regardless of whether each exposure was insured separately or bundled together under one insurance contract. If the proportional transaction costs were higher for the bundled policy, then the benefit of bundling illustrated above would be reduced (or even eliminated). Since the number of counterparties that have the expertise to price a complicated bundled contract might be limited, the market for such policies could be relatively thin and less liquid. Also, those institutions that possess the modeling expertise needed to price a complicated bundled policy may not have expertise in other areas, such as loss control and claims processing, that are demanded by firms. A bundled policy therefore could result in lower quality of services. Finally, a large body of insurance contract law exists, which lowers the transaction cost of settling coverage disputes and claims for standard policies. Until a similar body of law is developed for bundled policies, these transaction costs could be higher for bundled policies.

Summary
• Optimal risk retention/reduction decisions would consider the costs and benefits of reducing risk.
• Some of the firm characteristics that influence the benefits of reducing risk include firm size, the correlation among loss exposures, investment opportunities, whether future services are expected from the product or services produced, the correlation between losses and cash flows, and financial leverage.
• A basic rule of thumb for retention decisions is to retain relatively small, predictable loss exposures and insure against large losses that could cause financial distress or cause the firm to raise costly external capital.
• Several of the reasons for reducing risk imply that a firm should focus its risk reduction activities on an aggregate financial variable, such as earnings, cash flow, or taxable income.
• The risk of an aggregate financial variable can be reduced either by focusing risk reduction activities on the aggregate variable or by focusing on the risk of each individual component of the aggregate financial variable.

Notes:


LESSON 13: CASE STUDY - ORANGE COUNTY CASE: USING VALUE AT RISK TO CONTROL FINANCIAL RISK

Introduction
In December 1994, Orange County stunned the markets by announcing that its investment pool had suffered a loss of $1.6 billion. This was the largest loss ever recorded by a local government investment pool, and it led to the bankruptcy of the county shortly thereafter.

This loss was the result of the unsupervised investment activity of Bob Citron, the County Treasurer, who was entrusted with a $7.5 billion portfolio belonging to county schools, cities, special districts and the county itself. In times of fiscal restraint, Citron was viewed as a wizard who could painlessly deliver greater returns to investors. Indeed, Citron delivered returns about 2% higher than the comparable State pool (see the track record in Figure 1).


Citron was able to increase returns on the pool by investing in derivative securities and leveraging the portfolio to the hilt. The pool was in such demand due to its track record that Citron had to turn down investments by agencies outside Orange County. Some local school districts and cities even issued short-term taxable notes to reinvest in the pool (thereby increasing their leverage even further). This was in spite of repeated public warnings, notably by John Moorlach, who ran for Treasurer in 1994, that the pool was too risky. Unfortunately, he was widely ignored and Bob Citron was re-elected.

The investment strategy worked excellently until 1994, when the Fed started a series of interest rate hikes that caused severe losses to the pool. Initially, this was announced as a paper loss. Shortly thereafter, the county declared bankruptcy and decided to liquidate the portfolio, thereby realizing the paper loss. How could this disaster have been avoided?


Summary
The purpose of this case is to explain how a municipality can lose $1.6 billion in financial markets. The case also introduces the concept of Value at Risk (VAR), which is a simple method to express the risk of a portfolio. After the string of recent derivatives disasters, financial institutions, end-users, regulators, and central bankers are now turning to VAR as a method to foster stability in financial markets. The case illustrates how VAR could have been applied to the Orange County portfolio to warn investors of the risks they were incurring.

The Portfolio
In fact, Bob Citron was implementing a big bet that interest rates would fall or stay low. The $7.5 billion of investor equity was leveraged into a $20.5 billion portfolio. Through reverse repurchase agreements, Citron pledged his securities as collateral and reinvested the cash in new securities, mostly 5-year notes issued by government-sponsored agencies. One such agency is the Federal National Mortgage Association, affectionately known as Fannie Mae. The portfolio leverage magnified the effect of movements in interest rates. This interest rate sensitivity is also known as duration. The duration was further amplified by the use of structured notes. These are securities whose coupon, instead of being fixed, evolves according to some pre-specified formula. These notes, also called derivatives, were initially blamed for the loss but were in fact consistent with the overall strategy.

Citron's main purpose was to increase current income by exploiting the fact that medium-term maturities had higher yields than short-term investments. In December 1993, for instance, short-term yields were less than 3%, while 5-year yields were around 5.2%. With such a positively sloped term structure of interest rates, the tendency may be to increase the duration of the investment to pick up extra yield. This boost, of course, comes at the expense of greater risk. The strategy worked fine as long as interest rates went down. In February 1994, however, the Federal Reserve Bank started a series of six consecutive interest rate increases, which led to a bloodbath in the bond market. The large duration led to a $1.6 billion loss.

Value at Risk
What is VAR? VAR is a method of assessing risk that uses standard statistical techniques routinely used in other technical fields. Formally, VAR is the maximum loss over a target horizon such that there is a low, prespecified probability that the actual loss will be larger. Based on firm scientific foundations, VAR provides users with a summary measure of market risk. For instance, a bank might say that the daily VAR of its trading portfolio is $35 million, a number measured in the


same units as the bank's bottom line: dollars. Shareholders and managers can then decide whether they feel comfortable with this level of risk. If the answer is no, the process that led to the computation of VAR can be used to decide where to trim risk.

Questions
Let us place ourselves in the position of the county Supervisors, who had to decide in December of 1994 whether to liquidate the portfolio or maintain the strategy (obviously, based on past information only). At that time, interest rates were still on an upward path. A Federal Open Market Committee meeting was looming on December 20, and it was feared that the Fed would raise rates further. To assess the possibility of future gains and losses, VAR provides a simple measure of risk in terms that anybody can understand: dollars.

1. Duration approximation. The state auditor reported the effective duration of the pool as 7.4 years in December 1994. This high duration is the result of two factors: the average duration of the individual securities of 2.74 years (most of the securities had a maturity below 5 years), and the leverage of the portfolio, which was 2.7 at the time. In 1994, interest rates went up by about 3%. Compute the loss predicted by the duration approximation and compare your result with the actual loss of $1.64 billion.

2. Computation of portfolio VAR. The yields data file contains 5-year yields from 1953 to 1994. Using this information and the duration approximation, compute the portfolio VAR as of December 1994. Risk should be measured over a month at the 95% level. Report the distribution and compute the VAR:

Compute the monthly volatility forecast (the square root of Var[dy(t)]) and discuss whether recent interest rate swings are explained by elevated volatility.

Advanced (2) Next, we check whether the assumption of a conditional normal distribution seems adequate for changes in yields. Compute the number of exceptions at the one-tailed 95% level, using the monthly volatility forecast just computed and the actual increase in yield. Test whether the number of exceptions is in line with what was expected. (For the exception test, you can use the normal approximation to the binomial distribution. Also, be careful to match the volatility forecast with the subsequent change in yield.)

Really Advanced (Optional) The historical simulation approach assumes that changes in monthly yields have an independent, identical distribution (i.i.d.). The issue is whether this assumption is appropriate:
Consider now a model with mean-reversion in the mean, such


using a normal distribution for yield changes (Delta-Normal method), and using the actual distribution for yield changes (Historical-Simulation method). Compare the VAR obtained using the two methods.
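As a starting point for questions 1 and 2, the sketch below (Python) applies the duration approximation and a Delta-Normal VAR. The equity, leverage, duration and 1994 rate-rise figures come from the case; the monthly volatility of yield changes is an assumed placeholder, since in practice it should be estimated from the 1953-1994 yield series in the data file.

# Duration approximation (question 1) and Delta-Normal monthly VAR (question 2).
# The monthly volatility of yield changes below is an ASSUMED placeholder;
# estimate it from the 1953-1994 5-year yield series before relying on it.

equity = 7.5e9                # investor equity in the pool ($7.5 billion)
duration = 2.74 * 2.7         # security duration x leverage, about 7.4 years
dy_1994 = 0.03                # rise in yields during 1994 (about 3%)

# Question 1: loss predicted by the duration approximation.
loss_1994 = duration * dy_1994 * equity
print(f"Duration-approximated loss: ${loss_1994 / 1e9:.2f} billion")  # vs $1.64bn actual

# Question 2: Delta-Normal monthly VAR at the one-tailed 95% level.
monthly_yield_vol = 0.0040    # ASSUMED monthly std. dev. of yield changes
z95 = 1.645                   # 95% quantile of the standard normal
monthly_var = z95 * monthly_yield_vol * duration * equity
print(f"Monthly 95% VAR: ${monthly_var / 1e9:.2f} billion")

# Question 3: annualise with sqrt(12), assuming i.i.d. monthly yield changes.
annual_var = monthly_var * 12 ** 0.5
print(f"Annual 95% VAR:  ${annual_var / 1e9:.2f} billion")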

3. Interpretation of VAR.


Convert the monthly VAR into an annual figure. Is the latter number consistent with the $1.6 billion loss? From December 1994 to December 1995, interest rates fell from 7.8% to 5.25%. Compute the probability of such an event.

It seems that both in 1994 and 1995, interest rate swings were

particularly large relative to the historical distribution. Suggest two interpretations for this observation.

Advanced (1) Compute a time-varying volatility of changes in yields using the RiskMetrics approach to see if the recent volatility is abnormally high. The exponential model (as used in RiskMetrics) is: Var[dy(t)] = Var[dy(t-1)] * k + [dy(t-1)*dy(t-1)]*(1-k), where Var[dy(t)] is the conditional, predicted variance for time t and k is the decay factor, usually selected as 0.97 for monthly data. The model states that the variance forecast is a combination of the previous month's forecast and of the latest squared innovation. For the starting value of the variance (at time t=0), use the average variance over the whole period.
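A minimal sketch of this exponentially weighted (EWMA) recursion is given below; the series of monthly yield changes is made up and merely stands in for the case's data file.

# EWMA (RiskMetrics-style) variance forecast, as described above:
# Var[dy(t)] = k * Var[dy(t-1)] + (1-k) * dy(t-1)^2, with k = 0.97 monthly.

def ewma_volatility(yield_changes, k=0.97):
    """Return one-step-ahead volatility forecasts for each month."""
    # Starting value: average squared change over the whole sample.
    var = sum(dy * dy for dy in yield_changes) / len(yield_changes)
    forecasts = []
    for dy in yield_changes:
        forecasts.append(var ** 0.5)       # forecast made BEFORE observing dy
        var = k * var + (1.0 - k) * dy * dy
    return forecasts

# Example with made-up monthly yield changes (in decimals, 0.002 = 20 bp):
sample = [0.001, -0.002, 0.003, 0.004, -0.001, 0.005]
print(ewma_volatility(sample))

The forecast for each month is recorded before the month's change is folded into the variance, so it can be matched against the subsequent yield change in the exception test of Advanced (2).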


as the Vasicek model (if seen in the fixed-income course). Estimate the model, test whether mean reversion seems significant, and evaluate VAR in the context of this new model. Does the monthly VAR change? What about the annual VAR? Estimate a GARCH model for the change in yield and compare the forecasts to those of the EWMA model.

On December 31, 1994, the portfolio manager decides not to liquidate the portfolio, but simply to hedge its interest rate exposure. Develop a strategy for hedging the portfolio, using (i) interest rate futures, (ii) interest rate swaps, and (iii) interest rate caps or floors. For each strategy, describe the instrument and whether you should take a long or short position. On that day, the March T-bond futures contract closed at 9905. The contract has a notional amount of $100,000. Its duration can be measured by that of the cheapest-to-deliver (CTD) bond, which is assumed to be 9.2 years. Compute the number of contracts to buy or sell to hedge the Orange County portfolio.

1. Hedging.

This contract has typical trading volume of 300,000-400,000 contracts daily. Verify with recent volume data at the NSE. Would it have been possible to put a hedge in place in one day? Assuming that futures can be sold in the required amount, would the resulting portfolio be totally riskless?
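For the futures leg of the hedging question, a duration-based hedge ratio can be sketched as follows. Reading the 9905 quote as 99-05 (i.e. 99 + 5/32 per $100 of face value) is an assumption about the quoting convention, and the resulting contract count is illustrative, not the case's official answer.

# Duration-based hedge with T-bond futures (a sketch, not the official answer).
# Case figures: pool duration of about 7.4 years on $7.5bn of investor equity,
# a futures quote assumed to mean 99 + 5/32 per $100 on a $100,000 notional,
# and a cheapest-to-deliver (CTD) duration of 9.2 years.

portfolio_value = 7.5e9
portfolio_duration = 7.4

futures_price = (99 + 5 / 32) / 100 * 100_000   # dollar value of one contract
ctd_duration = 9.2

# Selling N contracts changes dollar duration by -N * D_F * F, so choose N
# to offset the portfolio's dollar duration D_P * P.
n_contracts = (portfolio_duration * portfolio_value) / (ctd_duration * futures_price)
print(f"Contracts to sell: roughly {n_contracts:,.0f}")

A short position of this size can then be compared with the contract's typical daily trading volume to judge whether the hedge could realistically be put in place in one day.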

Notes:



LESSON 14: INTERACTIVE SESSION


Notes:


LESSON 15: EXPOSURE MANAGEMENT OVERVIEW


Chapter Objectives
• Overview of financial risk management
• Hedging with derivatives
• Option pricing models
• Hedging with futures / forward contracts
• Other derivative contracts

UNIT I CHAPTER 7 HEDGING WITH DERIVATIVES



insolvent or even go bankrupt. He will then not be able to fulfil the contract conditions, making the seller's expected income uncertain. That risk is usually called insolvency risk or credit risk, and it lasts until the final payment for that particular transaction in the specific foreign currency is made.

Introduction
Financial risks faced by companies
Today, the economic environment in which most firms operate is highly volatile and uncertain. One of the main factors affecting this process is increasing market globalisation and internationalization, which is reflected in increased exchange rate, interest rate and inflation rate fluctuations as well as in high competition, demand levels, etc. Consequently, the firm will be exposed to risks; exposure is identified by Duma (1978) as what one has at risk or, in other words, as the amount which is exposed. Firms may act as buyers and sellers simultaneously on the international market. We shall begin by seeing how the above-mentioned factors influence the firm's ongoing business, before talking about different types of risk exposure concepts. Figure No. 1 (Eiteman 1995, p. 186) gives us a clear picture of how different kinds of risk are associated with the firm's business transactions, based on the life span of the firm's transaction from the seller's point of view.

The firm is already exposed to risk, in terms of quotation risk, before this particular business transaction begins. Quotation risk exposure is created at the moment, Time 1, when the seller quotes the price, which is presented to the buyer in written or verbal form. In the case of an unfavorable/favorable exchange rate change, the seller's inflows in the home currency might decrease/increase. The other important point is the competitors' price level, which might change as well. Both these factors might cause tender cancellation risk: the tender price might be changed before the contract is signed, resulting in a cancellation of the tender and of the anticipated foreign currency inflows. This risk is usually called antenatal risk, and it may not be reflected in the firm's accounting numbers. At this moment, the exposure will only be an estimation; neither the size nor the time of the exposure may be known. If the price, and all the other transaction conditions, suit the buyer, they will then place an order with the seller at the price agreed at Time 1. At that moment (Time 2) the backlog exposure appears, and this will last until the moment when the seller ships the product to the buyer (Time 3). The risk is not usually shown at this stage in accounting numbers, but the firm already starts to commit costs and funds in order to produce that product. Thus, this later period's risk may influence the firm's future cash flow. Then, coming up to Time 3, which is usually the point in time when most firms begin appropriate accounting records, this becomes billing exposure, which means that the customer may become insolvent

Another example is to have a detailed look at how one specific factor, such as an exchange rate change, influences the firm's accounting record. Oxelheim and Wihlborg (1997) have designed a model to test the effect of exchange rate changes on a firm's cash flow. The following example (Example No. 1), based on the scenario analysis shown in their book, gives us a clear picture of how an exchange rate change might affect the firm's sales volume, prices and costs, resulting in an explanation of cash flow exposure measurement.

Example No. 1: How an exchange rate change affects a firm's accounting record
Data: a Swedish company produces 100 units of its product in Sweden, while selling the product both in Sweden and the United States, 50 units in each market. It has major competitors in the United States and Germany. The firm uses a mark-up pricing strategy. Ignore taxes.
Basic case: Sales, 100 units; Unit price = 2 * (COGS imported + COGS domestic + wages)
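Since the original numerical table for Example No. 1 is not reproduced here, the sketch below uses hypothetical cost figures and exchange rates; only the structure (mark-up pricing, half of the output exported, imported inputs priced in USD) follows the example.

# Scenario-analysis sketch in the spirit of Example No. 1. All figures are
# ASSUMED placeholders: 50 units sold at home, 50 exported, imported inputs
# priced in USD, and a SEK price set at twice unit cost (mark-up pricing).

def cash_flow_sek(usd_sek, price_sek, price_usd,
                  cogs_imported_usd=1.0, cogs_domestic_sek=2.0, wages_sek=1.0,
                  units_domestic=50, units_export=50):
    """Operating cash flow in SEK for given prices and exchange rate."""
    revenue = units_domestic * price_sek + units_export * price_usd * usd_sek
    unit_cost = cogs_imported_usd * usd_sek + cogs_domestic_sek + wages_sek
    return revenue - (units_domestic + units_export) * unit_cost

# Base case: USD/SEK = 8.0; mark-up pricing sets the SEK price at twice unit cost.
base_rate = 8.0
base_cost = 1.0 * base_rate + 2.0 + 1.0
base_price_sek = 2.0 * base_cost
base_price_usd = base_price_sek / base_rate
base_cf = cash_flow_sek(base_rate, base_price_sek, base_price_usd)

# Scenario: the USD appreciates 5% while list prices and volumes stay fixed
# (the "no simultaneous change in sales" case discussed in the text).
new_rate = base_rate * 1.05
scenario_cf = cash_flow_sek(new_rate, base_price_sek, base_price_usd)

print(f"Base cash flow:     SEK {base_cf:,.1f}")
print(f"Scenario cash flow: SEK {scenario_cf:,.1f}")
print(f"Exposure effect:    SEK {scenario_cf - base_cf:,.1f}")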




In the second case, if no sales volume change takes place, then, in comparison with the first case, the cash flow will change by SEK 1,250 (41,250 - 40,000). However, in reality it does not always happen like this. The exchange rate change might increase the Swedish firm's sales price only if competitors do not change their prices; if they do, the Swedish firm's sales volume will be affected. Assuming moderate price sensitivity (price elasticity = 1), cash flow will fall by another SEK 2,062.5 (41,250 * 5%) as a result of the reduction in sales. So the total cash flow change will be a decrease of SEK 812.5 (41,250 - 40,000 - 2,062.5).

From the above analysis, we can say that today's economic environment sets much higher requirements for financial managers than it did ten years ago. Today's financial manager should have excellent qualifications in order to manage market-dictated risks in an appropriate way. More and more firms have to take up the challenge of macroeconomic environment fluctuations, and try to solve such critical problems as:
• How to manage risks associated with exchange rate, interest rate, and inflation rate changes?
• How to build effective links between the firm's financial strategy and its microeconomic environments?

Financial Risks
A company's activities face different kinds of risks. In order to be able to introduce financial risks, a general definition of the risk concept is needed. Risk, according to Oxelheim and Wihlborg (1997, p. 18), is a measure of unanticipated changes. In our paper, we break down every type of risk that a company might face into two groups: financial and non-financial. We will leave non-financial risks aside, as we are not concerned with them, and concentrate on the financial ones. Financial risk is the likelihood and magnitude of unanticipated changes in interest, exchange and inflation rates. As one might expect, financial risk might be broken down into interest rate, exchange rate and inflation rate risks. According to Oxelheim and Wihlborg (1997, p. 27-28) the above-mentioned risks are defined in the following way:
• Interest rate risk refers to the magnitude and likelihood of unanticipated changes in interest rates that influence both the costs of different capital sources in a particular currency denomination and the demand for the product.
• Exchange rate risk refers to the magnitude and likelihood of unanticipated changes in exchange rates.
• Inflation rate risk refers to the magnitude and likelihood of unanticipated changes in the inflation rate.
• Inflation and exchange rate risk taken together give currency risk.

Exchange, interest and inflation changes in the market are very interrelated and usually have a high degree of correlation. The main reason why these three factors recently became of major concern is the effect they were having on the firms value. The above mentioned factors are the main causes of the companys financial risk exposure and value volatility. In other words, they might influence the companys value in a positive way, when the company is worth more than expected (upside risk), or in the


The reader may wonder why instead of a decisive beginning using the exchange rate risk, we include such a long introduction describing all kinds of financial risks. The point is that the consisting parts of financial risk are very correlated among themselves, and often offset each other. If we had perfectly efficient markets then, according to the International Fisher Parity (IFP), exchange rates would just reflect the changes in interest rates among different currencies and exchange rate risk would be zero. In the real life, we have a lot of shifts from IFP that induce exchange rate risk. All of the above-described financial risks, currency risk and, specifically, exchange rate risk have received the most attention. As noted, most current approaches in managing these risk presume implicitly or explicitly that exchange rate variability is independent of variability of other macroeconomic factors(Oxelheim and Wilhborg, 1997, p. 28). In general, the majority of our viewed theoretical sources presumes, and believes, that every single financial risk is independent, though one should be very careful separating and calculating them. Just imagine the situation when the exchange rate between USD and SEK changed because of the lift in SEK interest rates. The negligence of interdependence between interest rate changes and exchange rate changes would cause the same exposure being measured twice. Therefore, the measurement of the effect of financial risks on companys cash flows should be made in recognition of the interdependence among them. One more reason why exchange rate risk has received particular attention is that it, more than any other financial risk, follows changes in the market and, less than the others, depends on nonmarket economy factors such as government or central bank interference. In other words, exchange rate risk is more predictable than others and therefore more manageable. Although we should emphasis that it is predictable and manageable approximately as much as the market by itself.


Exchange, interest and inflation rate changes lead to exchange, interest and inflation rate risks respectively, which in aggregate form financial risk. Each of the additive parts of financial risk is handled using certain financial or commercial instruments. Exchange rate risk can be managed using financial instruments (futures, forwards, options), commercial instruments (matching the maturities and amounts of foreign currency cash flows) and pricing strategy. Interest rate risk is usually manageable using interest rate swaps or assets and liabilities management (ALM). The latter tool might also be used for inflation rate risk management, but in the long run we believe it can be offset by exchange and interest rate changes.
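As a small illustration of the first of these instruments, the sketch below compares an unhedged foreign currency payable with one locked in by a forward contract; the amounts and rates are invented for illustration only.

# Hedging a foreign currency payable with a forward contract (illustrative
# amounts and rates). The firm owes 1 million DEM in 90 days and can either
# stay unhedged or lock in a forward rate today.

payable_dem = 1_000_000
forward_rate = 4.50            # SEK per DEM agreed today for delivery in 90 days

def sek_cost(spot_at_maturity, hedged):
    """SEK cost of settling the payable in 90 days."""
    rate = forward_rate if hedged else spot_at_maturity
    return payable_dem * rate

for spot in (4.30, 4.50, 4.70):            # possible spot rates in 90 days
    unhedged = sek_cost(spot, hedged=False)
    hedged = sek_cost(spot, hedged=True)
    print(f"spot {spot:.2f}: unhedged SEK {unhedged:,.0f}, hedged SEK {hedged:,.0f}")

The forward removes the variability of the SEK cost, at the price of giving up the gain that would arise if the foreign currency depreciated.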

Economic risk Ankrom defines as the sum of the first two (translation and transaction risk) after eliminating double counting in inventory. The author does not cover the threats from real exchange rate movements.

Shapiro (1996), whose concepts we used a lot in our work, gives a series of definitions that form a good starting point. He describes:
• Currency risk, in general, as the degree to which a company is affected by exchange rate changes;
• Accounting exposure as a measure of currency risk arising from the need to convert the financial statements of foreign operations from local currencies to home currency; the restatement of assets, liabilities, revenues and expenses at new exchange rates will result in exchange gains and losses;
• Economic exposure as another measure of currency risk, based on the extent to which the value of the company, as measured by the present value of its expected future cash flows, will change when exchange rates change.

Shapiro subdivides economic exposure into:
• Transaction exposure, which is the possibility of incurring gains or losses, upon settlement at a future date, on transactions already entered into and denominated in a foreign currency;
• Real operating exposure, which arises because currency fluctuations, together with price changes, can alter the amounts and riskiness of a company's future revenue and cost streams, i.e. operating cash flows.

Another economist who tried to penetrate the same field was Buckley (1986), who classifies currency risk into transaction exposure, translation (accounting) exposure, and economic exposure.

Buckley defines the three concepts in terms similar to Shapiro's, with the difference that what Shapiro called operating exposure Buckley terms economic exposure. Both authors relate economic and transaction exposure to cash flows, though Buckley does not go as far as Shapiro in identifying economic exposure with deviations from purchasing power parity. Other writers such as Walker (1978) and Wihlborg (1980) used definitions broadly similar to those used by Shapiro and Buckley. The main difference between Kenyon's (1981) and previous writers' definitions is the way in which he defined financial currency risk.

negative way, when the company's value decreases by more than was expected (downside risk). Apart from the downside risk, which without the right management strategy might cause financial distress, smoothing the upside risk also gives the company value in the form of lower taxation. Most countries have a convex corporate taxation system (Dhanini, 2000, p. 33): the higher the profit, the higher the tax percentage applicable. Therefore, during the periods when the company earns high profit, it pays higher taxes, although at times when low or even negative profits are generated no compensation is given. The main danger is financial distress, which is very costly; according to Copeland's study, the average indirect bankruptcy cost was 17.5% of the company's value one year prior to bankruptcy (Copeland, 1999, p. 69).

Overview of Existing Classifications and Terminologies of Financial Risks
In order to give a reasonable basis for our choice, as well as to provide the reader with an appropriate grasp of the topic, we will now give an overview of the existing classifications and terminology of financial risks. One of the pioneers in the financial risk definition process was Ankrom (1974), who first used the expressions translation, transaction and economic risks, defined as follows:
• Translation risk recognizes only items already on an accounting balance sheet,
• Transaction risk comes from future sales and purchases certain to take place, but before the company is able to adjust prices in line with exchange rate movements,

Changes in the nominal exchange rate will influence the values of the company's existing assets, liabilities and other commercial commitments. Financial risks were subdivided into:
• Trading risk = mismatch between currencies of cost and of sale,
• Balance sheet risk = mismatch between assets and liabilities in a given currency.

outsourcing trends, even these types of firm find themselves more and more related to their exchange rate risk exposure partners. Different firms have different targets to achieve, such as profit, economic value, shareholders wealth, book value. In turn, the personal managers risk attitude causes a different choice of targets. For example, if the firms target is to maximize the profit, then the manager may be more concerned about the level of profit over a particular time period. On the other hand, if the target is shareholders wealth maximization, then the manager might be more concerned about the probability of bankruptcy, in this case he might be more willing to sacrifice some level of profit in order to reduce the variability of companys value and cash flows. Since shareholders are the owners of joint venture companies, their interests should be of primary concern. This is the attitude that recently received a lot of attention in risk management literature, as well as in the joint venture companies annual reports. Since, according to financial theory, the firms value is the net present value of its future cash flow, it is emphasized much more in the firms economic value, so our exchange rate exposure analysis will be based on economic exposure calculation and management, presuming that the management is risk averse. Exchange rate exposure Lets start with a simple example, which hopefully will make the introduction of some main concepts clearer. One Swedish company that buys raw material in Germany pay in DEM, and has 90 days deferred payment. The companys main activity is in Sweden, and the biggest part of its cash inflows is in SEK. It is not difficult to realize that if the DEM suddenly and unexpectedly increases in price just before the maturity of the payment to the German supplier, the company incurs losses, as it is forced to pay more SEK than was expected for the same amount of DEM. In other words, the company is exposed to DEM price changes. We arrive at the main definition in this chapter, i.e. exchange rate exposure, which according to A.C.Shapiro (1996, p. 277) is the degree to which a company is affected by exchange rate change. Following the Shapiro way of exchange rate exposure classification in the coming chapters, we will present it, describing accounting versus economic exposure and then breaking down exchange rate exposure into translation, transaction and operating exposures providing the description of every single one of them. Accounting practice and economic reality Accounting exposure arises from the need, for purpose of reporting and consolidation, to convert the financial statements of foreign operations from local currencies (LC) involved to the home currency (HC) (Shapiro, 1996, p. 237). Big multinational companies usually have foreign subsidiaries and a lot of foreign operations. As a consequence, foreign currency denominated assets and liabilities as well as revenues and expenses take place in their values. However, the investors and the other interested part of society need values expressed in one currency in order to get a clear understanding about the companys overall financial results. Therefore, in accordance with accounting standards at the end of accounting period (quarter, year) all foreign subsidiaries values are translated in to HC. Assets and liabilities might be translated in current (post change) exchange rate and are considered to be exposed, or at historical (pre-exchange) rate,

We found that Shapiro's point of view of currency risks was mainly grounded in the following reasons:
• appropriate reasoning of his deep belief in purchasing power parity in respect to economic exposure;
• grounded explanations showing why none of the existing accounting systems is able to reflect economic or cash flow streams;
• the best grasp of the connection between transaction exposure and operating exposure, both of which are the key issues in exchange rate exposure calculation.

Problem Discussion Among the above-mentioned financial exposures, it is especially the exchange rate risk exposure that becomes more and more important in light of world markets globalisation and internationalization. Foreign exchange exposure (FEE) comes from the international trade and financial activities, such as foreign loans, guarantees etc. As an example, one big multinational company buys its raw material in the domestic market and sells its final product in both domestic and foreign markets. Assume that the situation in the markets changes, and as a consequence the foreign currency becomes cheaper in relation to the domestic one. What will happen to such a company? If the company cant increase the price, its products to be sold in the foreign market, will generate less income than earlier, because the domestic currency as well as the final product, will become more expensive in comparison with the foreign currency and prices level. Following the same logic, it is not difficult to realize, that the foreign competitors of our company will get the competitive advantage, being able to offer the lower price for the same product in our domestic market. Therefore, the company will incur double losses: it will lose part of the domestic market and part of the foreign market. Not only big multinational companies, but also small firms having only domestic trade operations, become increasingly dependant on the world market main currencies fluctuations. With common


Kenyon (1990), citing his previous book, further suggested that any of these financial risks could be viewed and managed either in accounting terms, i.e. as accounting or translation risk, or in cash terms, i.e. as transaction risk, but that these two concepts refer to different ways in which management looks at the same risks, rather than to two different risks. One contrast between Shapiro's and Kenyon's (1981) classifications stands out: Shapiro regards the main division as being between the accounting model and the economic or cash flow model, whereas Kenyon (1981) gives primacy to the contrast between risks from the real and nominal exchange rates, a contrast also stressed by Shapiro.


and are regarded as not exposed. In some literature accounting exposure is referred to as translation exposure; we would therefore like to stress that it is the same thing, and we introduce both terms not to confuse the reader but to give a full overview of the terminology used in the different sources of the literature. Translation exposure is simply the difference between exposed assets and exposed liabilities (Shapiro, 1996, p. 238). This difference increases or decreases the company's earnings and is reported as foreign exchange gains or losses. There are four different translation methods: the current/non-current, monetary/non-monetary, temporal and current rate methods.

Economic exposure is based on the extent to which the value of the firm, as measured by the present value of its expected future cash flows, will change when exchange rates change (Shapiro, 1996, p. 277). Economic exposure measurement is based on all of the company's future cash flows, while accounting captures only part of them. Moreover, accounting numbers are not adjusted to reflect the distorting effect of inflation and relative price changes on the associated future cash flows. Economic exposure, in turn, can be separated into operating and transaction exposures.

Accounting measures of exposure focus on the effect of currency changes on previous decisions of the firm, as reflected in the book values of assets acquired and liabilities incurred in the past. However, book values (which represent historical cost) and market values (which reflect future cash flows) of assets and liabilities typically differ. Therefore, retrospective accounting techniques, no matter how refined, cannot truly account for the economic (that is, cash flow) effects of a devaluation or revaluation on the value of a firm, because these effects are primarily prospective in nature. Based on this, more and more companies are starting to rely on economic exposure measurement. It is hard work to persuade a person who may have been basing his decisions on accounting numbers for the past 30 years that there is a better way of doing the same things, but that is the objective, and hopefully with our thesis we will also contribute to it.

Among the translation methods, the most popular internationally are the monetary/non-monetary, current rate and current/non-current methods. Under the monetary/non-monetary method, monetary balance sheet items (cash, bank holdings, most claims and debts) are translated at the closing-date rate, and non-monetary balance sheet items (inventories, machinery, real estate) at the historical rate (the rate applying when the asset was acquired). According to this method only monetary items are supposed to be exchange rate exposed. Under the current rate method all assets and liabilities on the balance sheet are translated at the closing rate, so all asset positions are supposed to be exchange rate exposed. According to the third, current/non-current method, current assets and short-term debt on the balance sheet of foreign subsidiaries are translated at the closing rate, while fixed assets and long-term debt are translated at the historical rate. One should take into account that different translation methods result in different translation exposure. For example, the monetary/non-monetary method always yields a more positive result than the current rate method in any year during which the foreign currency has been devalued. Looking from the economic point of view, translation exposure is less important than the other two exposures mentioned, because translation losses are only book losses, while operating and transaction losses are expected and real cash losses respectively.

Operating exposure
Operating exposure, in some sources of literature also called economic exposure, competitive exposure, or strategic exposure, measures the change in the present value of the firm resulting from any change in the future operating cash flows of the firm caused by an unexpected change in exchange rates. So, sometimes it might be called cash flow exposure. The change in value depends on the effect of the exchange rate change on future sales volume, prices, or costs. In order to have a clear distinction in our paper, we define operating and transaction exposures as constituent parts of economic exposure. Measuring the operating exposure of the firm requires forecasting and analysing all of the firm's future individual transaction exposures together with the future exposures of all of the firm's competitors and potential competitors worldwide.

Cash flow
Usually, from an accounting point of view, cash flow or net cash flow means the difference between contracted cash inflows and cash outflows, although accounting cash flow definitions vary as follows:
• the total receipts minus payments;
• net profit before depreciation within some specific period;
• a measure of the company's ability to fund its capital


Translation Exposure
The best definition of translation exposure we found in Eiteman's book (1997, p. 187). The latter follows Shapiro's point of view and states that translation exposure, also called accounting exposure, is the potential for accounting-derived changes in owners' equity to occur because of the need to translate foreign currency financial statements of foreign affiliates into a single reporting currency to prepare worldwide consolidated financial statements.

Translation exposure can be seen as a measure of latent risk. In the short term, translation gains or losses on exposure have no cash flow effects, i.e. they are not realized over the reporting period. Cash flow gains and losses occur, however, if the company is liquidated, or in the future when assets and liabilities produce cash flows. Thus, ideally, translation exposure should capture the sensitivity of economic value, in the form of either liquidation value or present value of future cash flows, to exchange rate changes.
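To make the translation methods concrete, the sketch below computes translation exposure for a hypothetical subsidiary balance sheet under the current rate and the monetary/non-monetary methods; the balance sheet figures and exchange rates are invented for illustration.

# Translation exposure under two methods, using a HYPOTHETICAL subsidiary
# balance sheet (amounts in local currency, LC). Under the current rate
# method every item is exposed; under the monetary/non-monetary method only
# monetary items are exposed.

balance_sheet = {
    # item: (amount in LC, is_monetary, is_asset)
    "cash":           (100, True,  True),
    "receivables":    (200, True,  True),
    "inventory":      (300, False, True),
    "fixed assets":   (400, False, True),
    "payables":       (250, True,  False),
    "long-term debt": (350, True,  False),
}

def exposure(method):
    """Exposed assets minus exposed liabilities, in LC."""
    total = 0.0
    for amount, monetary, is_asset in balance_sheet.values():
        if method == "current" or monetary:
            total += amount if is_asset else -amount
    return total

# A devaluation of the local currency, say from 0.50 to 0.45 HC per LC,
# produces a translation gain or loss of exposure x (new rate - old rate).
for method in ("current", "monetary/non-monetary"):
    e = exposure(method)
    print(f"{method:>22}: exposure {e:6.0f} LC, "
          f"translation result {(0.45 - 0.50) * e:+.1f} HC")

With these numbers a devaluation produces a book loss under the current rate method but a book gain under the monetary/non-monetary method, which is the pattern noted in the discussion of the methods above.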


expenditure and debt repayment out of its own resources, which equals the net profit adding back depreciation, then plus or minus changes in inventories, receivables and payables.
From the economic point of view, cash flow has been defined in the following ways:


• The total net dividends which can be paid to the stockholders over future years.
• Any available cash for stockholders, i.e. earnings before

+/- Income from financial instruments such as: forward, swap, option. Usually, there are two ways to examine the firms cash flow exposure, scenario analysis and regression analysis. The scenario analysis is more related to the fundamental factors such as sales volumes, prices, and costs effect on the firms cash flow. Another method is to use historical data to forecast future effects, which is termed regression analysis. Two different variables need to be taken into account. One is an independent variable, which can be the exchange rate, interest rate values during a certain period or any other that is likely to affect the companys cash flows. Another one is dependent variable, which could be anything that the company is concerned about, such as revenues, book value, market value and so on. Oxelheim and Wihlborg (1997) have presented a good example to show how the effect of exchange rate change on a firms cash flow. Transaction exposure In his excellent book, Currency risk management, Alfred Kenyon uses the metaphors conception, birth, anniversaries, death to describe the life cycle of transaction exposure. Conception concerns the major price quotation problem area - when we commit ourselves to the mismatch. Birth is the moment when the contract is signed and the exposure becomes certain; when the commitment becomes a commercial or contractual reality, it has ceased to be unilateral. Anniversaries refer to the covering of the risk; any annual reporting dates at which interim gains or losses may be ascertained. Death refers to when settlement is made - the end of the exposure when we are free to convert the receipt or payment into the other currency and thus measure the final cash gain or loss. So, using Donaldson (1980)s words, transaction exposure can be explained as revenues in nature and exist for relative short periods. He says that a sale from seller to buyer in another currency must be in the currency of, at best, one of them, and the another one has an exposure, but only when there is a period of delay in payment for the goods, and most transaction exposure arise from the granting of credit. Therefore, summing up, transaction exposure arises from:
• Purchasing or selling goods or services whose prices are stated in foreign currencies, on credit,
• Borrowing or lending funds when repayment is to be made in foreign currency,
• Being a party to an unperformed foreign exchange forward contract, and
• Otherwise acquiring or incurring liabilities denominated in foreign currencies.

Let us take a look at what a firm's cash flow statement looks like from the economic point of view.

Usually, a multinational firm's cash flow statement is divided into two parts, i.e. operating cash flow and financial cash flow. Example No. 2 shows what the firm's cash flow statement looks like. Operating cash flow results from accounts receivable, accounts payable, rent and lease payments for the use of facilities and equipment, royalties and license fees for the use of technology and intellectual property, as well as assorted management fees for services provided, which could arise between an unrelated company and a subsidiary of the firm. Financial cash flows are payments for the use of loans (principal and interest), stockholder equity (new equity investment and dividends) and the firm's financial instruments such as forward contracts, options and swaps. Each of these cash flows can occur at different time intervals, in different amounts, in different currencies of denomination, and may have a different predictability of occurrence.

Example No. 2: Firm's cash flow statement
Commercial (operating) cash flows:
+ Sales revenues (domestic and foreign subsidiaries)
- Costs of goods (domestic and foreign subsidiaries)
- Wages and salaries (domestic and foreign subsidiaries)
- Rent, lease payments and other administration expenses (domestic and foreign subsidiaries)
- Depreciation (domestic and foreign subsidiaries)
+/- Change in accounts receivable
+/- Change in accounts payable
Financial cash flows:
+/- Change in loan amount (principal and interest)
+/- Change in stockholder equity value (new investment and dividends)




depreciation minus capital expenditure minus increases in working capital.


The future earnings of the firm including the overseas

subsidiaries. This is most similar to accounting concept 2; when we speak about cash flow later, it will always refer to this concept.

Cash flow exposure
Cash flow exposure might be defined as the extent to which the present value of a firm's future cash flow is changed by a given currency appreciation or depreciation. In general, it arises because of currency fluctuations, in combination with price changes, which alter the amounts and riskiness of a company's future revenue and cost streams. As we can see, cash flow exposure has a multidimensional effect, involving the interaction among the firm's strategies in financing, marketing and production. The firm's cash flow in the future will depend on its competitive ability. Computing this exposure requires a long-term perspective, viewing the firm as an ongoing concern with operations whose cost and price competitiveness could be affected by exchange rate changes.
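The regression approach to measuring this exposure, mentioned earlier, can be sketched as a simple ordinary least squares fit of cash flows on the exchange rate; the quarterly observations below are invented purely for illustration.

# Regression estimate of cash flow exposure: regress the company's cash flows
# (dependent variable) on the exchange rate (independent variable); the slope
# is the exposure coefficient. The data are made-up quarterly observations.

rates = [7.8, 8.0, 8.3, 8.1, 8.6, 8.9, 8.7, 9.0]          # SEK per USD
cash_flows = [400, 410, 432, 418, 455, 470, 462, 480]      # SEK thousands

n = len(rates)
mean_x = sum(rates) / n
mean_y = sum(cash_flows) / n
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(rates, cash_flows)) / n
var_x = sum((x - mean_x) ** 2 for x in rates) / n

beta = cov_xy / var_x          # SEK thousands of cash flow per 1 SEK/USD move
alpha = mean_y - beta * mean_x
print(f"Estimated exposure coefficient: {beta:.1f}")
print(f"Intercept: {alpha:.1f}")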


Now we will go back to the example that we present in the beginning of this chapter to explain how those exposures fit into the firms transaction life span. Below (figure No. 2), we present a basic framework of one business transaction, which starting at Time 1 ends at Time 4, repeating the cycle again.


(price elasticity = 1), we get the estimate that cash flow will fall by another SEK 2,062.5. In this case, since we can only estimate future sales volume and sales amount, it appears that the firm is faced with economic exposure and transaction exposure. From the above analysis, we can see that both transaction exposure and economic exposure focus on the aggregate effect of both the direct effect and the indirect effect. Direct exposure, captured by transaction and translation exposure, potentially exists whenever a firm sells or buys its products or inputs in a foreign currency. Indirect exposure exists when a firm has a supplier, customer or competitor that is exposed (Pringle J., 1995). The following table (Table 1) shows the effect of a home country currency appreciation or depreciation. From the table we can see the impact of the exchange rate change on the company.

Before the buyer signs the contract with the seller (Time 1), the seller is already exposed to the risk, even though, in this period, the exposure is not reflected in the accounting numbers and, at this moment, the exposure will only be an estimation. Neither size nor time of the exposure may be known at this time. From this point of view, some companies will identify the estimated sales volumes as a transaction exposure, others may treat it as economic exposure. Usually, firms can estimate their sales volume on the base of the historical performance. Basic on the transactions from long-term contracts with permanent customers, companies forecast the future sales volumes, keeping in mind possible deviations. The transaction will start in Time 1 and will not end until the transaction cycle is finished (Time 4). The economic exposure for one transaction is equal to the transaction exposure, though from the whole companys value point of view economic exposure can be broken down into transaction and translation exposures. In Time 2, the risk is not shown in accounting numbers as well, but it is at this point that the firms began to put into lots of costs and funds to generate that product, so this period risk may influence the firms future cash flow. Therefore, from this period, operating exposure will occur until the transaction end up in Time 4. Consequently, these will follow the translation exposure. Still, during Time 2 to Time 4, in order to fulfill the contract, the firm may need short term financing; they may get a loan from a bank at Time 2 with a certain interest rate, and then repay the loan at Time 4 with another interest rate. So, during this period they may have interest rate exposure. Consequently, they may have credit exposure as we mentioned before. Again we will go back to the example of how the exchange rate change effected the firms accounting record and look at how different exposure fits into that process. Firstly, the basic idea behind that example is operating cash flow=Sales-COGS (cost of goods sold), since both the sales amount and COGS will be changed because of the changed exchange rate, even though in the first case we assume there have no simultaneous change of the sales amount. So in this case, the firm may have operating exposure. Secondly, if we cancel the first assumption and leave valid only the second one, that is with moderate price sensitivity

The following figure (figure no. 3) will give us a clear picture of the relationship between transaction, translation and operating exposures.

Transaction exposure: the impact of having outstanding obligations that were entered into before the change in exchange rates but are to be settled after it. Another important concept of FEE is the time horizon. FEE might be broken down into short-term and long-term exposures. Short-term exposure is related to cash flow management, while long

al

79

Purpose Based on the above problem analysis, we found out that both transaction and operating exposures measure the exchange rate change effect on the firms cash flows. The main differences between operating and transaction exposures are the following: while transaction exposure focused on expected cash flows; transaction is with more farseeing strategies.

Operating exposure is more focused on accounting cash flows, Operating exposure is usually related to the near future, while

Thus, transaction exposure is the one we are going to concentrate on, since it best represents the real companys value FEE. Though, at the same time, its effect on the companys cash flows, sales volume and pricing strategies is very controversial. At the same time, transaction exposure is the most uncertain one, due to the following reasons: exposures, because of insolvency (credit) risk of the counter party, Estimated sales volumes effect is related to the transaction or economic exposures, because of the possibility that tender price might be changed before the contract will be singed causing cancellation of tender, and, finally, no anticipated foreign currency inflows (ante natal risk), economic exposure, due to sales volume uncertainty and, consequently, foreign currency inflows uncertainty,

The sales volumes effect is related to the transaction or economic

Do cu
80

Uncertain sales volumes might cause the commercial risk or the

Expected currency inflow might be exposed to the price risk,

which generates economic exposure in a way that the listed prices might be changed due to the changed in cost or competition level.

Due to the above-mentioned grounding, we think that the exchange rate risk is the most critical among financial risks exposures. At the same time transaction exposure management is the key issue among different exposures management. Therefore, the purpose of our thesis is two fold. First, we aim to give a full overview of the existing classifications of exchange rate exposures, focusing on the one that is the most useful, i.e. Transaction exposure life span. Secondly, we would like to apply it to a couple of real companies and compare these applications in
Copy Right: Rai University

ww Co w.p m dfw P iza D rd. F com Tr i


Notes:

RISK MANAGEMENTFOR GLOBAL FINANCIAL SERVICES

term to capital investment management. In our paper, we focus on the short-term exposure management, since we believe that during long run purchasing power parity offsets exchange rate exposure. So, summing up the above problem discussion, we would like to stress the following issues:
Assuming that the transaction exposure management is the

order to help them to improve transaction exposure management strategies. One of the main things, in measuring the companys transaction exposure, is to assess if and to which extent the companys business transactions are exposed. The key concept at this stage is how to define the exposure problem that depends on the firms target set. One way to meet the purpose is to find out what kind of exposure management the companies are using. In our analysis, we will also answer the questions of whether the companies we picked up are hedging the transaction exposure and, if so, how they are doing it. Another important issue is how to choose an appropriate strategy to manage transaction exposure. While more and more firms realize that they should manage transaction exposure, not all of them have come up with the appropriate management strategy. The complexity of foreign exchange rate changes appears in the following way: it influences not only a firms existing financial position, but also sales and prices which in turn will effect the firms future value. Therefore, the choice of an appropriate transaction exposure management strategy is another task we are going to work out in our thesis. We are going to review the chosen companies transaction exposure management strategies, compare them with the theoretical framework, and make observations and notations on the differences between theory and practice. Finally, we are going to come up with our suggestions on improvements of the companies exchange rate exposure strategies. Another important reason why we have chosen this particular subject is the applicability of the topic in our future work. The knowledge gained in this subject will be very useful due to the fact that in recent years almost all companies are more or less exposed to this type of financial risk. Thus we aim at to present the companies transaction management strategies in the following way:
To present the real companies transaction exposure

most important one among the above-mentioned concepts, how do the firms actually manage transaction exposure? Since transaction exposure goes through the whole life span of firms business transaction, is there a general strategy that every firm can use in transaction exposure management?
When the firms are managing FEE, how do they choose the

time horizon? Does there exist any general rule to find an optimal time horizon?

management systems and compare them with the theoretical framework. and practice along with possible reasons, and to come up with the suggestions on the transaction management strategies improvements.

To make observations on the differences between the theory


LESSON 16: EXPOSURE MANAGEMENT STRATEGIES

The risk management decision is the final phase of a three-step process. The first step is to recognize whether there is an exposure, the second is to measure it, and the third is to decide whether, and in which way, to manage it. Since we already know about the existence of exposure in our example, and have even measured it, the question now is how to manage it.

Why hedge?
Today, more and more firms try to manage their exposure through hedging. Hedging is the taking of a position, by acquiring either a cash flow, an asset or a contract (including a forward contract), that will rise (fall) in value and offset a drop (rise) in the value of an existing position. Therefore, the main purpose of a hedge is to reduce the volatility of the risks of existing positions caused by exchange rate movements (the smoothing effect). Figure No. 4 shows how the firm's expected value E[V] in the home currency looks before and after hedging. Hedging narrows the distribution of the firm's value around the mean of the distribution. From the figure, we can see that unless hedging shifts the mean of the distribution to the right it cannot increase the firm's value, which means that hedging not only protects the purchaser against loss, but also eliminates any gain that might result from changes in exchange rates. At the same time, hedging is not free; the firm must use its resources to undertake the hedging activity. In order to add value through hedging, the result must not only shift the mean to the right, but produce a net right-hand shift given the expenses related to hedging activities. So it is much more important to explain that the purpose of hedging is to reduce the exposure, even though some companies not only try to reduce exposure but also try to beat the market in order to make a profit. In our paper, we follow the idea that under efficient market conditions there should be no opportunity for speculation.

Thereby, to hedge or not to hedge is a continuously debated topic in multinational financial management. The proponents of hedging reason in the following way:
As we mentioned before, firms with a smoother value position can reduce the probability of business disruption costs. According to Altman's study, the average indirect bankruptcy costs are 17.5% (Altman, 1984) of the company's value one year prior to the bankruptcy.

Hedging will stabilize cost accounting and price setting.

Firms with a smoother value position can gain business opportunities and improve their planning capability so as to gain a competitive advantage over other companies in their industry. For example, if the firm can more accurately predict future cash flows, it may be able to undertake specific investments or maintain the R&D budget, and can thus introduce new products and take advantage of them.

Firms with a smoother value position can reduce the amount of taxes they pay, since most countries have a convex corporate taxation system: the higher the profit, the higher the applicable tax percentage. Therefore, in the periods when the company earns a high profit it will pay higher taxes, while in the periods when it generates a low or even negative profit no compensation is given.

Firms with a smooth value position can increase their debt capacity. Lenders are more willing to lend to companies that have stable cash flows and sufficient guarantee funds. When the firm's financial position is stable and its cash flow predictable, it has better borrowing and investment options.

Compared to individual stockholders, the firm's manager has an advantage in accessing different kinds of information. The depth and width of knowledge concerning the company's real risks and returns gives the manager the ability to decide, more precisely than anybody else, whether or not to hedge.

Compared to individual stockholders, the firm's manager has an advantage in tracing market disequilibria, which could be caused by structural and institutional imperfections as well as unexpected external shocks (oil crisis, war). Thus, the manager is in a better position than the stockholders to recognize market disequilibria and, therefore, has an advantage in decision-making concerning the protection of the firm's value through selective hedging.
Hedging opponents provide the following reasons for not hedging:

The stockholders can diversify currency risk in their portfolios in accordance with their personal risk attitudes. Therefore, the manager's spending of the company's resources on hedging is useless.

As mentioned before, hedging is not a tool with which one could increase the firm's value. In other words, hedging not only protects against loss, but also eliminates the possibility of gaining from exchange rate changes. Additionally, we should not forget that hedging is not free; the firm must use its resources to undertake the hedging activity.

Usually, the manager is more risk averse than the stockholders because he is concerned about his career and reputation. Therefore, the manager may conduct hedging activity at the stockholders' expense, when it is beneficial only for him but not for the stockholders. So, if the firm's target is only stockholder wealth maximization (which may not be the case), then part of the hedging activity might not be in the stockholders' interests.

Managers cannot forecast the market perfectly. Therefore, when a market is in equilibrium with respect to parity conditions, the expected net present value of hedging is zero. Thereby, a manager's mistakes in forecasting could result in unnecessary hedging.

One reason that leads the firm's manager to hedging is the accounting veil: in the income statement, a foreign exchange loss is a separate, highly visible line, while hedging costs are hidden in operating or interest expenses. That is why managers prefer some additional hedging costs to visible foreign exchange losses.
Financial instruments used in hedging

Forward rate contracts
A forward contract is the most common instrument used in hedging, mostly related to transaction exposure. For example, if a Swedish firm is expecting to receive US$10,000 in six months and during this time the US dollar depreciates, the expected receivable decreases in value, and vice versa. Therefore, in order to reduce this kind of exposure, the firm can go into the forward market and take a short position, selling US$10,000 forward for delivery in six months; this creates a position of minus US$10,000, which balances the firm's US dollar cash inflow and cash outflow.
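To make the offsetting mechanics concrete, here is a minimal Python sketch comparing the unhedged and hedged SEK proceeds of that US$10,000 receivable; the forward rate of 8.5 SEK/US$ and the candidate future spot rates are illustrative assumptions rather than figures from the text.

```python
# Minimal sketch of a forward hedge on a US$10,000 receivable.
# The forward rate (8.5 SEK/USD) and the candidate future spot rates are
# illustrative assumptions, not figures from the text.

RECEIVABLE_USD = 10_000
FORWARD_RATE = 8.5          # SEK per USD agreed today for delivery in 6 months

def unhedged_sek(spot_at_maturity: float) -> float:
    """SEK received if the firm simply converts at the future spot rate."""
    return RECEIVABLE_USD * spot_at_maturity

def hedged_sek(spot_at_maturity: float) -> float:
    """SEK received when US$10,000 is sold forward: the receivable is
    converted at spot, and the short forward pays (forward - spot) per USD."""
    conversion = RECEIVABLE_USD * spot_at_maturity
    forward_gain = RECEIVABLE_USD * (FORWARD_RATE - spot_at_maturity)
    return conversion + forward_gain   # always equals RECEIVABLE_USD * FORWARD_RATE

for spot in (8.0, 8.5, 9.0):
    print(f"spot {spot:4.2f}: unhedged {unhedged_sek(spot):10,.0f} SEK, "
          f"hedged {hedged_sek(spot):10,.0f} SEK")
```

Whatever the spot rate turns out to be, the hedged proceeds stay at the forward rate times the receivable, which is exactly the smoothing effect described above.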

Interest rate contracts
Money market instruments are quite similar to the forward contract; they also involve a contract and a source of funds to fulfil that contract. The firm can borrow money in one currency and then exchange it into another. After that, it can use the money generated from its business operations to repay the loan. The difference between forward and money market contracts is that the cost of the money market contract is determined by the interest rate differential, while the cost of the forward contract is determined by the forward rate quotation. In efficient markets, interest rate parity should ensure that these costs remain nearly the same, but not all markets are efficient at all times.
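A minimal sketch of the money market hedge described above, applied to the same US$10,000 receivable; the spot rate and the two interest rates are assumed purely for illustration.

```python
# Minimal sketch of a money market hedge for the US$10,000 receivable.
# The spot rate and the two interest rates are illustrative assumptions.

RECEIVABLE_USD = 10_000
SPOT = 8.5          # SEK per USD today (assumed)
R_USD = 0.06        # annual USD borrowing rate (assumed)
R_SEK = 0.04        # annual SEK deposit rate (assumed)
T = 0.5             # six months

# 1. Borrow the present value of the receivable in USD.
usd_loan = RECEIVABLE_USD / (1 + R_USD * T)
# 2. Convert the borrowed dollars to SEK at today's spot rate.
sek_today = usd_loan * SPOT
# 3. Invest the SEK for six months (or use it in operations).
sek_at_maturity = sek_today * (1 + R_SEK * T)
# 4. In six months the US$10,000 receivable repays the USD loan exactly.

print(f"SEK value locked in today: {sek_today:,.2f}")
print(f"SEK value at maturity:     {sek_at_maturity:,.2f}")
# Under covered interest parity this matches selling USD forward at
# F = SPOT * (1 + R_SEK*T) / (1 + R_USD*T).
print(f"Implied forward rate:      {SPOT * (1 + R_SEK*T) / (1 + R_USD*T):.4f} SEK/USD")
```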


Options contracts
A foreign currency option is a contract which gives the purchaser (buyer) an option (right), but not an obligation, to buy or to sell a certain amount of foreign currency or other securities at a fixed price per unit on a specific date or during a certain time period (Eiteman, 1997, p. 150). In recent years, more and more firms have started to use options as a hedging tool; we also noticed this trend in the results of the later surveys. A number of banks in the United States and other capital markets offer flexible foreign currency options on transactions of 1 million USD or more (Eiteman, 1997, p. 151).

Why Derivatives?
There is not a single investment bank which does not have a derivatives desk. Moreover, even some non-financial institutions now have their own derivatives analysts. For example, oil companies spend quite a lot of money on derivatives research, which may seem an odd activity unrelated to the industry's main business. Why, then, are derivatives so popular among so many? It turns out that different businesses love derivatives for different reasons. Banks use derivatives as a powerful instrument to generate profits and hedge their risks. Businesses use derivatives as sources of additional investment and also as risk management instruments. The derivatives user base is extremely large; it even includes pensioners, who can now buy options on places in retirement homes. First of all, we have to define what financial derivatives are. Generally speaking, a derivative is a financial instrument whose value is derived from the price of a more basic asset called the underlying. The underlying need not be a tradable product. Examples of underlyings are shares, commodities, currencies, credits, stock market indices, weather temperatures, sunshine, results of sport matches, wind speed and so on. Basically, anything which may have, to a certain degree, an unpredictable effect on any business activity can be considered as the underlying of a certain derivative. All derivatives can be divided into two big classes: linear and non-linear.

Linear derivatives are those whose values depend linearly on the underlying's value. This class includes forwards and futures, and swaps.

Non-linear derivatives are those whose value is a non-linear function of the underlying. This class includes options, convertibles, equity-linked bonds and reinsurance.

One can add some other instruments to both of these classes. For example, bonds can be viewed as non-linear derivatives with the interest rate being a non-tradable underlying. During the course we will talk about each of the types of derivative products listed above. Although the main goal of my course is not to teach how to use various derivatives, but rather how to price them, I will try to explain the most common applications of some of them.

Forwards and Futures
A forward is a contract between two parties agreeing that at a certain time in the future one party will deliver a pre-agreed quantity of some underlying asset (or its cash equivalent in the case of non-tradable underlyings) and the other party will pay a pre-agreed amount of money for it. This amount of money is called the forward price. Once the contract is signed, the two parties are legally bound by its conditions: the time of delivery, the quantity of the underlying and the forward price. While the delivery time and the delivery quantity of the underlying asset can be fixed without any problem, the question is how the parties can agree on the future price of the underlying when the latter can change randomly due to market price fluctuations.


It turns out that in the case of forward contracts there exists what is called the fair forward price of the asset. This can be found as follows. Suppose we sell a one-year forward contract, meaning that we take a responsibility to deliver in one year a certain quantity, say n units, of the underlying asset whose current market price is S. In order to avoid any exposure to the market risk, we can borrow from a friendly bank the amount n × S and buy the necessary quantity of the underlying. In other words, we sell a covered forward. At the end of the year, we will deliver the asset to the buyer of the forward contract, who will pay us the forward price n × F. From this amount we have to repay the bank our loan, which has obviously grown to n × (1 + r) × S, where r is the one-year interest rate quoted by our bank. Thus, at the end of the year our cash flow is

n × [F − (1 + r) × S]

Since we started with no money, we have to end with no money. Otherwise, by selling or buying forward contracts we would be able to make unlimited profit without taking any risk, which is not possible in practice. Therefore, we have to impose the constraint

F − (1 + r) × S = 0, that is, F = (1 + r) × S

This gives us the forward price. If any other price is written in the forward contract, one of the parties will be able to make a riskless profit by selling or buying such contracts in large (theoretically unlimited) quantities. The example considered above shows how all the specifications of a forward contract can be fixed by the parties to their mutual satisfaction. The argument which helped us to discover the forward price was that there must be no trading strategy allowing for a risk-free profit. This is called the no-arbitrage principle. There is another requirement which has to be fulfilled when we look for the forward price: the cash flow must remain equal to zero at any time up to the expiration of the forward contract. We only showed that the cash flow is zero at the starting and ending points of the contract; we did not consider what happens to the cash flow at intermediate moments. In order to investigate this problem, we need a model describing the behaviour of the underlying asset. We will return to this question when we talk about the random motion of share prices. For the time being, we will assume that in the example considered the cash flow does vanish at all times before maturity; we will prove this assumption later on.

Another assumption is that the interest rate r does not change in time. This might be true for short periods of time; for one year it is almost certainly a wrong conjecture. However, the effects of changing interest rates are quite negligible and can be ignored in many cases. Let us consider a specific example based on real data taken from the Financial Times.

Example:
The underlying = British Airways
The spot price = 334.25 (31 August 2000)
The time to maturity = six months (0.5 of a year)
The six-month risk-free interest rate = 6.84%
The forward price = (1 + 0.0684) × 334.25 = 357.11
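The same calculation in a short Python sketch; the function is generic, and the inputs are the British Airways figures quoted above.

```python
# No-arbitrage forward price F = (1 + r) * S, applied to the
# British Airways example (spot 334.25, rate 6.84%).

def forward_price(spot: float, rate: float) -> float:
    """Fair forward price for one period at simple interest rate `rate`."""
    return (1.0 + rate) * spot

spot = 334.25        # British Airways, 31 August 2000
rate = 0.0684        # six-month risk-free rate used in the text
print(f"forward price: {forward_price(spot, rate):.2f}")   # about 357.11
```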

Futures are standardized forwards which are traded on exchanges. All futures positions are marked to market at the end of every working day. To illustrate this procedure, let us suppose that we bought a three-month futures contract on crude oil at 30 per barrel. The next day the futures closing price for the same delivery date is 31 per barrel. This means that our contract has gained one Euro, because at maturity we still have to pay only 30. In this case, the seller of the futures contract immediately pays 1 into our account. Suppose that one day later the futures closing price drops to 29. Now we have to pay two Euros into the seller's account; by this time our contract has lost one Euro. This process continues until the maturity date. Because of this specific mechanism adopted by futures exchanges, contracts are settled in cash, and only in some special cases does the seller have to physically deliver the asset (especially in the commodities markets). We can draw the following table (the day 3 closing price is implied by the running totals):

Day | Futures closing price | Payment to the buyer | Buyer, cumulative | Seller, cumulative
0   | 30                    | -                    | 0                 | 0
1   | 31                    | +1                   | +1                | -1
2   | 29                    | -2                   | -1                | +1
3   | 28                    | -1                   | -2                | +2

At the end of the third day, the seller of the futures contract is better off by 2. Forwards and futures are designed to reduce the risks related to the uncertainty of future market prices for both sellers and buyers of underlying assets. By entering into this type of contract, both sides achieve complete certainty about their future positions, which may help them to have better control over their financial resources. However, many traders take futures positions for purely speculative reasons. For instance, if we sell an uncovered futures contract (i.e., when we do not have a long position in the underlying asset), then when the asset price goes down our futures position gains a profit, and vice versa. In what follows we will be using the following definitions. A long position in an asset is a position that benefits from price increases in that security (an investor who buys a share has a long position, but an equivalent long position can also be established with derivatives). A short position benefits from price decreases in the security. A short position is often established through a short sale. To sell a security short, one borrows the security and sells it. When one unwinds the short sale, one has to buy the security back in the market to return it to the lender. One then benefits from the short sale if the asset's price is lower when one buys it back than it was when one sold it.
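The daily variation-margin mechanics can be reproduced in a few lines of Python; note that the day-3 closing price of 28 is implied by the cumulative result stated in the text rather than listed explicitly.

```python
# Daily marking to market of the crude-oil futures position from the text:
# bought at 30, closing prices 31, 29 and 28 on the next three days.
# Each day the buyer receives (or pays) the change in the closing price.

entry_price = 30.0
closing_prices = [31.0, 29.0, 28.0]   # day 3 price implied by the text

buyer_cumulative = 0.0
previous = entry_price
for day, close in enumerate(closing_prices, start=1):
    daily = close - previous          # variation margin paid to the buyer
    buyer_cumulative += daily
    previous = close
    print(f"day {day}: close {close:5.1f}, buyer receives {daily:+.1f}, "
          f"buyer total {buyer_cumulative:+.1f}, seller total {-buyer_cumulative:+.1f}")
```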

Swaps
Swaps, as the name suggests, are instruments which allow a swap holder to receive a floating interest rate from, and pay a fixed interest rate to, a swap seller for a certain period of time. The interest rates are paid on the same fixed notional principal.
Swaps can be arranged in various ways. For example, there are swaps between different currencies, in which case the parties swap domestic and foreign rates. A swap can be priced as a combination of bonds.

Bonds
Bonds are securities which pay a certain fixed amount on a certain fixed date in the future. Since we know how much we will get on some future date, we can find the present value of the notional by discounting this amount to the present time with respect to a certain interest rate. If the rate were known in advance, the price of the bond would be very easy to calculate. For example, if the rate is fixed and equal to 5% per annum, then a one-year bond with a notional value of 1000 should now cost

1000 / (1 + 0.05) = 952.38

However, in reality interest rates are not known in advance, at least not for very long periods of time. Therefore, the pricing of bonds represents a challenging problem, which involves various assumptions about the behaviour of interest rates.
Swaps can be priced in terms of bonds. Let us consider a swap with N payments at times Ti, i = 1, ..., N, with year fractions di = Ti − Ti−1, fixed rate c and discount bonds B(t, Ti). The outgoing fixed-rate part of the swap is given by

Fixed leg = c × [d1 × B(t, T1) + d2 × B(t, T2) + ... + dN × B(t, TN)]

The incoming floating-rate part can be presented as follows

Floating leg = [B(t, T0) − B(t, T1)] + [B(t, T1) − B(t, T2)] + ... + [B(t, TN−1) − B(t, TN)] = B(t, T0) − B(t, TN)

Indeed, the bond B(t, Ti) can be expressed in terms of the bond B(t, Ti−1) as follows

B(t, Ti) = B(t, Ti−1) / (1 + fi × di)

where fi × di × B(t, Ti) = B(t, Ti−1) − B(t, Ti) is the interest earned from time Ti−1 to Ti discounted to the present time. Obviously, it coincides with the incoming floating-rate contribution to the swap. Taking into account that B(t, T0) = 1, we find

Floating leg = 1 − B(t, TN)

All in all, the value of the swap to the floating-rate receiver is given by

Swap value = 1 − B(t, TN) − c × [d1 × B(t, T1) + ... + dN × B(t, TN)]

We can now see that a swap, from the floating-rate receiver's side, can be presented as a combination of positions in bonds with different maturities. Normally, the fixed rate is chosen so that the swap's present value is equal to zero. Swaps are extremely liquid instruments and, therefore, their market values can be used to price bonds.
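As a numerical illustration of the bond-based valuation above, the following sketch values a small swap from assumed discount factors; the year fractions, discount factors and fixed rate are hypothetical figures, not taken from the text.

```python
# Value a plain-vanilla swap (floating receiver) from discount bond prices,
# using: swap value = 1 - B(t,T_N) - c * sum(d_i * B(t,T_i)).
# The discount factors, year fractions and fixed rate are assumed figures.

def swap_value(fixed_rate, year_fractions, discount_factors):
    """Present value per unit notional to the party receiving floating."""
    fixed_leg = fixed_rate * sum(d * b for d, b in zip(year_fractions, discount_factors))
    floating_leg = 1.0 - discount_factors[-1]
    return floating_leg - fixed_leg

def par_rate(year_fractions, discount_factors):
    """Fixed rate that makes the swap value zero."""
    annuity = sum(d * b for d, b in zip(year_fractions, discount_factors))
    return (1.0 - discount_factors[-1]) / annuity

year_fractions = [1.0, 1.0, 1.0]               # three annual payments
discount_factors = [0.95, 0.90, 0.85]          # B(t,T1), B(t,T2), B(t,T3)

print(f"par swap rate: {par_rate(year_fractions, discount_factors):.4%}")
print(f"value at 5% fixed: {swap_value(0.05, year_fractions, discount_factors):+.4f} per unit notional")
```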

Options
Options are the most flexible of all derivatives because they give the option holder a multiple choice at various moments during the lifetime of the option contract. However, the option seller does not have such flexibility and always has to fulfil the option holder's requests. For this reason, the option buyer has to pay a premium to the option seller. There are three main categories of options: European, American and Bermudan. European options can be exercised only at expiration. American options can be exercised at any moment prior to maturity. Bermudan options can be exercised prior to maturity, but only on certain pre-determined days. Put options give the right to sell the underlying asset; call options give the right to buy it. There also exist chooser options, where the option holder has the right to choose between the call and the put payoff. There are hundreds of different types of options, which differ in their payoff structures, path dependence, and payoff trigger and termination conditions.
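The payoff functions below illustrate the call, put and (simple) chooser payoffs at expiry; the strike and sample underlying prices are assumed, and the chooser is modelled as if the choice were made at expiry.

```python
# Payoffs at expiry for European call and put options, and a simple
# "chooser" payoff (the larger of the two, i.e. as if the choice were
# made at expiry). Strike and sample prices are illustrative assumptions.

def call_payoff(spot, strike):
    """Right to buy at the strike: worth max(S - K, 0) at expiry."""
    return max(spot - strike, 0.0)

def put_payoff(spot, strike):
    """Right to sell at the strike: worth max(K - S, 0) at expiry."""
    return max(strike - spot, 0.0)

def chooser_payoff(spot, strike):
    """Holder picks whichever of the call or put payoff is larger."""
    return max(call_payoff(spot, strike), put_payoff(spot, strike))

STRIKE = 100.0
for spot in (80.0, 100.0, 120.0):
    print(f"S={spot:6.1f}  call={call_payoff(spot, STRIKE):6.1f}  "
          f"put={put_payoff(spot, STRIKE):6.1f}  chooser={chooser_payoff(spot, STRIKE):6.1f}")
```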

Operating management hedging strategies

Matching
Matching, also called natural hedging, is a way to decrease currency exposure by covering cash outflows with inflows in the same currency. The firm can use natural hedging in several ways. In the above example, where the Swedish firm has an expected USD cash inflow, it could acquire the same amount of debt (including interest) in the United States market for the same period. Having US dollar outflows on the inflow day, it would be able to pay its debt, including principal and interest, without any further hedging. This is similar to the money market hedge mentioned before. Another way is based on changes in operating strategy. The company can set up foreign subsidiaries based on market concentration. Let us say that the Swedish firm from the previous example, which has a lot of cash inflows in USD, opens a manufacturing subsidiary in the USA, which incurs cash outflows in USD (the subsidiary's costs). The Swedish company's exposure to US dollars would thereby be effectively covered. In the later case study, we found out that one of the companies (SKF) is trying to follow the latter pattern. This activity is relatively effective in eliminating currency exposure when the firm's cash flows can be predicted consistently over time. Thus, the main advantage of natural hedging is that transaction exposure can be effectively covered without any transaction cost. Another advantage is that the matching strategy offers a particular benefit to companies that are subject to exchange control regulation constraining their activities in the foreign exchange market. For example, it provides an acceptable solution to the problem where it is apparent that an exposure exists but there is no coverable exposure as such defined for the purposes of exchange control. Even though the concept of matching is simple, there are a number of complexities associated with using the technique. For example, the time periods used by companies in the management of their exposures will vary with the nature of their business. If the chosen period is too short, the number of time periods will quickly escalate, adding work for the data collectors and increasing the number of specific decisions. It is likely, therefore, that the exposures being matched will be those arising over a period as long as a month, or even more.

Risk sharing
Risk sharing means that the seller and the buyer agree to share the currency risk in order to keep a long-term relationship based on product quality and supplier reliability, so that they will not destroy the long-term relationship just because of an unpredicted exchange rate change. Following our previous example, if the spot rate is SEK 8.5/US$ and, six months later, the spot rate turns out to be SEK 9/US$, then the Swedish firm, which expected to receive SEK 85,000, will get SEK 90,000. In this case, if both contract parties agree to share the risk, for example half each, then the Swedish firm agrees to receive $10,000 × (8.5 + (9 − 8.5)/2) = SEK 87,500. So, the risk sharing arrangement is intended to smooth, for both parties, the impact of volatile and unpredictable exchange rate movements, and the firms can still use this strategy to manage the cash flow exposure.
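A short sketch of the risk-sharing settlement rule from the example: the receivable is settled at the contract rate plus half of the subsequent rate movement.

```python
# Risk-sharing settlement from the text: the US$10,000 receivable is
# settled at the contract rate plus half of the difference between the
# realised spot rate and the contract rate.

RECEIVABLE_USD = 10_000
CONTRACT_RATE = 8.5        # SEK per USD agreed when the deal was struck
SHARE = 0.5                # each party bears half of the rate movement

def settlement_rate(spot_at_payment: float) -> float:
    return CONTRACT_RATE + SHARE * (spot_at_payment - CONTRACT_RATE)

for spot in (8.0, 8.5, 9.0):
    rate = settlement_rate(spot)
    print(f"spot {spot:4.2f}: settle at {rate:5.3f} SEK/USD "
          f"-> {RECEIVABLE_USD * rate:,.0f} SEK")
```

At a realised spot of 9.0 the settlement rate is 8.75, giving the SEK 87,500 figure used above.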

Netting
An alternative method to the previous one is to use a netting system. This system is often based on the establishment of a re-invoicing centre, where each separate subsidiary deals only in its own currency, leaving all the transaction exposure to the re-invoicing centre. There are some advantages of a re-invoicing centre:

It is easy to control the overall firm's activity when all the currency exposure is netted in one place, thus ensuring that the firm as a whole follows a consistent policy.

Transaction costs are lower because of the centralized netting system.

Each subsidiary can concentrate on what it is specialized in.

There still exist some drawbacks to the re-invoicing centre. For example, the netting system insulates the internal suppliers from their ultimate external customer market, which can mislead the firm into setting sub-optimal pricing and other commercial decisions. A firm's re-invoicing centre can measure the transaction exposure on a daily, monthly or even quarterly basis, depending on the firm's exposure management policy. As we mentioned before, most firms act simultaneously as buyers and sellers on the international markets for commodities, so they have to manage both the accounts payable and the accounts receivable in a single foreign currency.


The following example (example no. 3) explains how re-invoicing centres measure transaction exposure based on weekly data with respect to different foreign currencies. In the following case, we see that the transaction exposure is basically the gap between the firm's accounts receivable and accounts payable. However, different companies can use different information, depending on the firm's special conditions, such as call, order or import and export data, to measure exposure.

Example No. 3 Netting transaction exposure
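Since the table of Example No. 3 is not reproduced here, the following sketch shows the kind of calculation a re-invoicing centre performs: weekly receivables and payables are netted per currency, and the net positions are what the centre would consider hedging. The currencies and amounts are illustrative assumptions.

```python
# Net weekly transaction exposure per currency, as a re-invoicing centre might:
# exposure = accounts receivable - accounts payable in each foreign currency.
# The currencies and amounts below are illustrative assumptions.

receivables = {"USD": 120_000, "DEM": 80_000, "GBP": 30_000}
payables    = {"USD":  70_000, "DEM": 95_000, "JPY": 5_000_000}

currencies = sorted(set(receivables) | set(payables))
for ccy in currencies:
    net = receivables.get(ccy, 0) - payables.get(ccy, 0)
    position = "net inflow (sell forward)" if net > 0 else "net outflow (buy forward)"
    print(f"{ccy}: {net:>12,.0f}  {position if net else 'fully matched'}")
```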

This kind of transaction exposure management can very quickly provide the firm with an overview of the short-period exchange rate risk. If one currency appreciates (depreciates) by a certain percentage, then the net exposure before the hedge appreciates (depreciates) by the same percentage. Thus, the firm can, based on this information, find its way out in the financial market.


On the other hand, the above-mentioned example is the simplest version of the netting strategy, based on the assumption that the accounts receivable and accounts payable are all due in the same period. However, that is not always the case. If the firm has a large number of transactions due in different periods, where the measurement and hedging of the netted exposure depend on each transaction's time horizon, on the specifics of separate or aggregate contracts, etc., then the calculation of the best netting period, as well as of the hedging amount, becomes quite complicated.

Practical strategies
Pricing strategy
The case that we presented in chapter one explained how an exchange rate change affects the firm's cash flow. The pricing strategy and the sensitivity of demand to competitors' prices are two important factors which affect the firm's exchange exposure. Therefore, it would be logical to presume that if the firm sets a flexible pricing strategy, it can handle the exchange rate exposure easily. However, that is not always the case. As a matter of fact, some industries, such as the chemical, petroleum and mining businesses, have few pricing decisions to make relative to the currency risk, since those industries depend heavily on economies of scale, which means they are price takers instead of price setters. For example, in the SKF case, the company whose activity we are going to analyze later, the buyer, not the seller, dictates the price. Additionally, there are still some costs associated with a price-changing policy, such as the long-term customer relationship, the customers' loyalty to the firm, and so on.

Diversification
From the above-mentioned Pringle analysis, we may get the impression that firms can manage the currency exposure through diversification of both operating and financial policies. At first sight, we may say that diversification of both strategies gives a lot of choices: the firm can diversify its operations through such branches of its activity as sales, the location of production facilities and raw material sources, while financial policy diversification can be done by using funds in more than one capital market and in more than one currency. However, it is not always an easy way. Some industries may require such large economies of scale that it is not feasible to diversify the production location, and some firms may be too small to be known by international investors or lenders. Thereby, whether diversification, especially of the operating strategy, can be used depends mostly on the firm's characteristics. In the later case study, we will look at whether this kind of strategy is feasible.


LESSON 17: CASE STUDY IN SKF


Overview of SKF
SKF is a company with a long history behind it. It was founded in 1907, started as a manufacturing company, soon became the leading manufacturer in the bearing industry and has maintained this position ever since. Recently, the service business has also become an increasingly important part of the SKF Group's operations. SKF's central office is in Gothenburg. The company has a network made up of its own sales companies in some 50 countries, plus more than 7,000 independent distributors and dealers worldwide. SKF manufactures its products at some 80 production sites in 22 countries.

The SKF business is organized in six Divisions and one area covering operations related to the aviation industry. SKF is one of the biggest joint-stock companies in Sweden: nearly 44.3% of the shares, representing 22.6% of the voting rights, were owned by foreigners (30/12/1999). The biggest Swedish shareholder is Investor AB, holding 14.2% of the shares, representing 28.8% of the voting rights.

Main products, suppliers, customers, competitors and net sales distribution
The main products of the company are bearings, seals, and special steel and steel components. SKF has 15-17% of the world market and 30% of the European market for ball bearings. Its principal competitors have the greater part of their production capacity in the following regions: four competitors in Japan (about 25-30% of the world market), two competitors in the USA and two competitors in Europe. SKF's main raw material is steel, and 50% of the steel the company uses is made by SKF itself. SKF's manufacturing is widely spread geographically, but with a concentration in continental Europe, the USA and Sweden, although during the latest 10-15 years the situation has changed from manufacturing being concentrated in continental Europe to being almost evenly spread among the European, US and emerging Asian markets. The company is quite successfully trying to ensure that the subsidiaries have less difference between export and import or, in other words, that most cash outflows are covered by cash inflows in the same foreign currency. This, as will be explained later, is a very favourable condition for the natural hedging strategy. During the latest 10-15 years, the difference between import and export in the main US, European and Asian markets decreased from 25-30% to 20%. The following figures (figures 5 and 6) describe the net sales distribution by geographical area and customer segment.

Figure No. 5 SKF net sales by customer segment 1999


Overview of the company's latest years' activity and future plans
As an introduction, we would like to give some key data on the size and performance of the company, as follows (table 2):

Table No. 2 SKF key data on size and performance

All the above and the other numbers presented in this chapter are taken from the company's Annual Report of 1999. During 1999, SKF increased its earnings due to the reductions in inventory, real estate and employees carried out in the context of weak market demand. The Group's net sales decreased by 2.6%, owing to volume (−5%), currency effects (+2%) and price/mix (+0.5%). The improvements in the price/mix factor, and part of the decline in volume, are attributed to the new strategy initiated in 1998, which prioritized profitability and discontinued unprofitable business. Capacity utilization was unsatisfactory last year, due to the low demand and the inventory reduction. In the future, the company is going to pay more attention to new technologies and to use them as an instrument to create more profitable business for the Group. The Group is now focusing on expanding its activities in several different areas, especially emphasizing service and maintenance; these two areas have high priority and are growing both organically and by acquisitions. The company is also going to continue the reshaping programme, selling off unprofitable business and looking for new prospective areas.

Financial objectives
The financial objective the company has set itself, as indicated in the annual report, is value creation for its shareholders. The financial risk management objective the representative of the company defined as: not to have any unexpected surprises or, in other words, cash flow smoothing.

The Group's financial policy defines currency, interest rate and credit risks, and establishes responsibility and authority for managing these risks. The policy states that the objective is to eliminate or minimize risk and to contribute to a better return through active management of risks. SKF's financing policy is that the financing of the Group's operations should be long term (maturities exceeding three years). As of December 1999, the average maturity of SKF's loans was 4.5 years. SKF should have additional payment capacity in the form of surplus liquidity and/or long-term credit facilities, amounting to approximately MUSD 350. The Group has been assigned a BBB+ rating for long-term credits by Standard & Poor's and a Baa2 rating by Moody's Investors Service.

An analysis of SKF's financial risk management

The management of financial risk, and the responsibility for all treasury operations, are largely centralized in the SKF Treasury Centre (see figure 7), the Group's internal bank. This means that all currency exchange operations inside the company or with other companies, exposure measurement (partly), hedging and financing operations are made there. For example, if an SKF subsidiary in France sells something to a German buyer (if we exclude the possibility of settling the payment in EUR), the buyer might require settling the payment in DEM. If the payment is deferred, the subsidiary informs the centre about the currency exposure and, on the payment day, makes the DEM/FRF currency exchange in the internal bank. If the payment is made at sight, then only the currency exchange takes place. Thereby the French unit of the company receives all its payments in FRF, and the subsidiaries (as shown below) have no currency risk: every unit of the company pays and receives payments only in its local currency.
In the Annual Report for the year 1999, we found the following notation regarding the type of exchange rate risk the company is focusing on: "The most important currency risk to which the Group is exposed is changes in the exchange rates, which affect the future flows of payments." The main emphasis in managing the currency risk is put on transaction exposure. The starting point of the company's exposure measurement is the moment when the billing exposure appears in the company's accounting records, representing the shipment of goods and the presentation of the invoice for these goods to the buyer. The accounting records representing the described transactions during the month are made in the subsidiaries and, on the last day of the month, are sent to the Treasury Centre. For example (example no. 4), on the last day of July 1999, the SKF Treasury Centre received the following report from the Austrian subsidiary:

Export and import are expressed on the basis of the amounts of the invoices sent and received by the subsidiary.
Based on the information received from the foreign subsidiaries and the Swedish units of the company, the Treasury Centre makes the overall company exchange rate exposure report for each currency. The company has a special internal rule regarding the exchange rate exposure period, saying that the billing exposure period (representing trade transactions within Europe) cannot be longer than 10 days; for trade transactions between different continents, the period should not exceed 40 days. Therefore, the company's exchange rate exposure for one transaction does not exceed 40 days. Since 90% of the trading transactions are made in Europe, 90% of the payments for goods are settled within 10 days. The period of the majority of trading transactions is 3-6 months. The company's exchange rate exposure measurement is made on a monthly basis (as described above) on the last day of the month. The company's exposure hedging horizon is three months. The hedging activity takes place four times a year, as follows: at the beginning of January for the first quarter, at the beginning of April for the second quarter, and so on.
The hedging volume is based on the so-called Prognos, which is the forecast of the invoicing amount for the coming quarter based on the last year's performance. The forecast is calculated in the following way:

Example No. 5 Prognos
The company's exposure is in US dollars:
The performance for the last year: (January + February + ... + December) = 200 000
The performance for the last month: (December) = 20 000 × 12 = 240 000
The performance for the last three months: (October + November + December) = (13 000 + 12 000 + 20 000) × 4 = 180 000
The performance for the last six months: (July, August, ..., December) = (9 000 + 17 000 + 18 000 + 13 000 + 12 000 + 20 000) × 2 = 178 000
Prognos = 180 000 (roughly equal to last year's US dollar net position; it might be adjusted to the coming year's company strategy, for example, shutting down part of the production or signing a big, long-lasting contract with a new customer in a particular subsidiary).
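The Prognos arithmetic can be restated in a short sketch; the July-December figures and the full-year total are those quoted in the example, while the unlisted January-June months are left implicit.

```python
# Sketch of SKF's "Prognos" invoicing forecast (Example No. 5).
# The July-December figures and the full-year total of 200,000 are from
# the text; the January-June figures are implied but not itemised here.

monthly = {          # invoicing in USD, most recent last
    "Jul": 9_000, "Aug": 17_000, "Sep": 18_000,
    "Oct": 13_000, "Nov": 12_000, "Dec": 20_000,
}
full_year_total = 200_000           # stated in the example

last_month = monthly["Dec"]
last_3 = monthly["Oct"] + monthly["Nov"] + monthly["Dec"]
last_6 = sum(monthly.values())

estimates = {
    "full year":         full_year_total,
    "last month x 12":   last_month * 12,     # 240,000
    "last 3 months x 4": last_3 * 4,          # 180,000
    "last 6 months x 2": last_6 * 2,          # 178,000
}
for label, value in estimates.items():
    print(f"{label:18s} {value:>8,d}")

# The Prognos is then set at roughly 180,000, close to last year's net USD
# position; a quarter of it (45,000 USD in the example) is sold three months
# forward at the start of each quarter.
```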

Prognos, or the forecast, is made for a one-year period and might be adjusted if any significant deviations of the real invoicing amount from the forecast take place. Otherwise, according to our example, the Prognos says that in the first quarter the company is going to have 45 000 USD more inflows than outflows. The company will then, at the beginning of January, sell 45 000 USD three months forward. To the question of whether the company looks back and checks if the forecast was right, the representative answered positively, without identifying the real numbers. To the question of how often the Prognos needs to be adjusted, the representative answered: not often. Due to the fact that approximately 50% of the company's customers are permanent and the business cycles are comparatively predictable, it seems quite reasonable to presume that the invoicing forecast for the coming quarters might be quite accurate. The company's objective is to have zero currency risk and, according to the representative, they are close to it. As we can observe, the company is hedging 100% of its exposure. The currency of denomination is the Swedish krona. Speaking about hedging, the season of the year should be taken into account: during the summer or Christmas holidays, less attention to hedging position adjustment is needed. Talking about the exposure hedging time period, we should point out that the company's hedging period policy changed due to the Swedish krona devaluation at the end of 1992; before, the company was hedging on a yearly basis, but starting from 1994 it changed to a quarterly basis. One more important thing to mention is transaction costs: the SKF treasury department is a cost centre and, to our knowledge, does not account for or record transaction costs in any way. The company's main exposure management strategy is netting. The company's exposure management system as it is today has not been changed in the last 5-6 years, and the representative of the company expressed his full satisfaction with it, although, keeping in mind the company's strategy over the latest 10-15 years of balancing foreign currency inflows and outflows, more and more natural hedging is taking place. Forwards are the main instruments the company uses for exposure hedging. Options are sometimes used as well, but only in cases when there is a high probability that a particular currency will fall below (or rise above) a certain limit. Options are used more widely for currency trading purposes, though, as we can see below, the trading portfolio of the company is comparatively small. For a better grasp of the magnitude of the financial derivative instruments the company uses, the following numbers in MSEK are given (table 3):
One more specific factor related to the management of SKF's business exchange rate exposure should be mentioned: SKF has a comparatively small capability to change its pricing strategy, because of its specific customer situation. The biggest part of SKF's prices is locked into big-volume, long-term contracts with the car industry's original equipment manufacturers (30% of production) or the paper industry (26% of production). Only 44% of SKF's production is sold to distributors selling bearings for replacement, and the latter cluster of customers is the only one on which SKF could use its pricing strategy in the case of drastic exchange rate changes.

Let us take the company's most common business transaction and try to fit it into the transaction life span as it was described in figure no. 1. A majority of SKF transactions take place according to long-term contracts, meaning that the same transactions follow each other for 5-6 years under already signed contracts. But for the very first one, the transaction life span starts exactly as described in the figure, when SKF quotes the price to the buyer. It is clear that, starting from that moment, the company already has exchange rate exposure. In other words, if after that moment the exchange rate were to change drastically, the company would risk incurring losses if it kept to the tender price, or might lose the buyer if it decided to increase prices. Therefore, the company is already exposed to quotation exposure, a part of the transaction exposure, though still without any effect on cash flows. SKF is not measuring or managing quotation exposure. Backlog exposure starts from the moment the contract is signed. From that moment, SKF starts to produce steel and bearings and the expenses start to increase. One important thing to mention is that almost all the bearings are produced in Sweden, from steel which the company also produces itself in Sweden. Therefore, only the cash inflows from the production sold are exchange rate exposed, and almost no costs are exposed. In the case of unfavourable exchange rate changes, the company will generate less cash inflow than expected, while the fixed costs have already been incurred, creating cash flow exposure (part of the transaction exposure). For transactions financed from external sources (bank loans etc.) in order to buy the raw materials and start production, if an unfavourable interest rate change were to take place (a drop), the company would have to pay a higher than market interest rate. Thereby, from Time 2 the interest rate exposure starts: the more expenses are made (raw material, steel production, bearing production) and the more of the loan or credit line from the bank is used, the higher the interest rate exposure incurred, meaning that the interest rate exposure increases during Time 1-Time 2. During Time 2-Time 4, the company produces and sends the goods, along with the bill, to the buyer. The accounting record is made, showing the amount of the bill as an account receivable.
Expenses are incurred and the risk of losses appears, creating credit risk in case the buyer does not pay the bill. To our knowledge, during Time 1-Time 2 SKF performs only some kind of interest rate and credit exposure management but, since interest rate and credit risk exposure are outside our concern, we are not going to go deeper into that topic. At Time 3, SKF ships the goods and bills the buyer. From that moment, the amount of the cash flow receivable appears in SKF's accounting books. All the exposures described above, which appeared at Time 1 and Time 2, are increased by the shipment and any other cost related to transport (in the case where SKF is paying). SKF measures only part of the transaction exposure, within Time 3-Time 4. In 1998, the company stopped hedging translation exposure: in 1994 the company was hedging translation exposure at 100%, during 1995-1997 it was decreasing the percentage hedged, and from 1998 it stopped hedging translation exposure altogether. The reason for leaving the translation exposure unhedged, according to the representative of the company, was the single euro currency system. At the beginning of every month, the subsidiaries' net currency positions (the excess of export over import) are pooled into one pot.
Forward contracts are the financial instruments mostly used for hedging the company's exposure position. In the example given at the beginning of the chapter, the Austrian subsidiary is exporting 40.569 ATS more than it is importing; therefore, SKF as a whole is exposed to ATS. To hedge that exposure, 40.569 ATS had to be sold forward to insure the company against the exchange rate risk on the same amount of ATS inflow.

Observations and suggestions for SKF
After the analysis of SKF's actual transaction exposure management system, we made the following observations and suggestions.

Starting moment of the exposure management
Observation: The first observation concerns the starting moment of the transaction exposure hedging with respect to the theoretical transaction life span. In the case of SKF, we found out that SKF starts to hedge transaction exposure from Time 3, even though the contracts with the customers are signed in the second period.

Suggestions: Based on the knowledge we received, we recommend that the company should at least start to hedge in the period when the contract is signed. In Kenyon's book, he says "we can take action to hedge the risks as soon as we are sure to have landed our contract; this is usually at signature, sometimes later, sometimes earlier" (Kenyon, 1981, p. 82); otherwise, the company bears the currency risk during that period. It is obvious that once a quotation has been submitted, the exposure appears and the firm needs to consider how to manage it. Another interesting thing is that, being a big international manufacturing company, SKF has a large percentage of long-term contracts and permanent customers. Therefore, a big part of SKF's prices is locked into big-volume, long-term contracts with the car industry's original equipment manufacturers (30% of production) or the paper industry (26% of production). Whether or not to hedge the potential exposure during the quotation period is an important issue. From the moment the firm starts to be exposed, a hedging decision will be accompanied by transaction costs. Prior to that point, an evaluation of the probability of the contract being signed should be made: if the chances of winning the contract are low, then attempting to cover the risk would seem more like speculation than hedging. At this point, the firm needs to consider its historical performance carefully. So, summing up the above-mentioned problem, we would like to make the following suggestion: the company could evaluate its customers and sort them into different clusters depending on their historical performance, that is, the record of fulfilling their contracts: permanent, less permanent and non-permanent. Depending on the cluster the customer belongs to, a different hedging percentage is set, as shown in the sketch below. For example, for customers who belong to the permanent category, the firm can hedge 100% of the expected transaction exposure already at the quotation period; for customers in the less permanent category, 80% of the contract volume is hedged from the moment the contract is signed; and, finally, for customers in the non-permanent category, 50% is hedged. We think that this way of grouping customers would help the company to avoid big fluctuations resulting from different types of customers.
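A minimal sketch of the suggested cluster-based hedging rule; the customer names and expected exposures are illustrative assumptions, while the 100%, 80% and 50% ratios follow the suggestion above.

```python
# Sketch of the suggested customer-cluster hedging rule: permanent customers
# hedged 100%, less permanent 80%, non-permanent 50%. The customer names and
# expected exposures below are illustrative assumptions.

HEDGE_RATIO = {"permanent": 1.00, "less_permanent": 0.80, "non_permanent": 0.50}

expected_exposure_usd = [           # (customer, cluster, expected inflow in USD)
    ("OEM car maker",   "permanent",      500_000),
    ("paper mill",      "less_permanent", 200_000),
    ("new distributor", "non_permanent",  100_000),
]

total_hedge = 0.0
for customer, cluster, exposure in expected_exposure_usd:
    hedge = exposure * HEDGE_RATIO[cluster]
    total_hedge += hedge
    print(f"{customer:15s} {cluster:15s} hedge {hedge:>10,.0f} USD")

print(f"{'total forward sale':31s} {total_hedge:>10,.0f} USD")
```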


Inter-company indirect transaction risk
Observation: The second observation concerns the inter-company indirect sales transaction risk. As we described before, SKF's treasury department deals with the whole company's financial exposure management, while the subsidiaries do not take any exchange rate risk at all.

Suggestions: In fact, if we look deeper into the SKF Group's internal transactions, we might find that currency exposure does exist in the subsidiaries. From the representative of the treasury department, we found out that the company has an internal rule of using the forward rate for the next quarter's invoices. Since we have not received any other information from the subsidiaries, we will set up an example to explain how an exchange rate change affects the Group's internal transactions. Assume that the director of the United States subsidiary decides the final price of the subsidiary's production in the United States market (it is common that the pricing decision is taken in the country where the goods are sold to outside customers). The forward rate is 1 US$ = 8 SEK and the inter-company price is 280 SEK/unit, which costs the United States subsidiary 35 US$/unit; with a minimum gross margin of 12.5% of the selling price, the final price in the United States market is 40 US$/unit. Now suppose the exchange rate changes, say the forward rate moves to 1 US$ = 8.75 SEK; then the cost for the United States subsidiary turns out to be 32 US$/unit, the final price is still 40 US$/unit, and the United States subsidiary has a gross margin of 20%. That seems good enough; on the other hand, the United States subsidiary has the alternative of using the windfall cost reduction for a selling price reduction in order to win extra sales and market share. A third alternative is to adopt a halfway position, using part of the saving for a price reduction and part for enlarging the profit margin. As we can see from the example, the exposure, which SKF probably did not consider, still exists in the subsidiaries. Thus, theoretically, if the inter-company price between SKF and its subsidiary in the United States were fixed in US dollars, the above-mentioned exposure would not arise.
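The margin effect in the example can be recomputed directly; the inter-company price, selling price and the two forward rates are those used above.

```python
# Reproducing the inter-company pricing example: the inter-company price is
# fixed at 280 SEK/unit, the US selling price at 40 USD/unit, and the US
# subsidiary's gross margin is recomputed for two internal forward rates.

INTERCOMPANY_PRICE_SEK = 280.0
US_SELLING_PRICE_USD = 40.0

def us_margin(forward_rate_sek_per_usd: float) -> tuple[float, float]:
    """Return (unit cost in USD, gross margin as a share of the selling price)."""
    cost_usd = INTERCOMPANY_PRICE_SEK / forward_rate_sek_per_usd
    margin = (US_SELLING_PRICE_USD - cost_usd) / US_SELLING_PRICE_USD
    return cost_usd, margin

for rate in (8.0, 8.75):
    cost, margin = us_margin(rate)
    print(f"forward rate {rate:5.2f} SEK/USD: cost {cost:5.2f} USD/unit, "
          f"gross margin {margin:.1%}")
```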

Risk sharing

Observation: As was mentioned above, about 50% of the company's contracts are long term (3-5 years) with permanent contractual parties. Production prices in such contracts are based on tender prices and are usually fixed. The company does not use any exchange rate clauses in the contracts; it either takes all the risk upon itself or builds some expectation of exchange rate changes into the price.

Suggestion: Let us take an example and presume that the company signs a contract, based on a tender, to sell 1 million bearings over 5 years at a fixed price denominated in US dollars. During that period, if the exchange rate moves so that the USD loses value against the SEK, the Swedish company will incur losses. Another important point is that the longer the validity period of the contract, the greater the uncertainty about exchange rate changes; in our example it is 5 years plus the tender period. Since the company's financial risk objective is to have zero exchange rate risk, we think that the best solution to this problem is to share the exchange rate risk using so-called currency clauses or pricing strategy elements such as floating price setting. These ways of risk sharing are usually applied as follows:

To share the exchange rate risk through a so-called currency clause, meaning that if the USD/SEK exchange rate changes (following our example), the contractual party that gains from the change compensates the other party for half of the advantage;

To set a floating production price, which floats in accordance with exchange rate changes.

Hedging period and transaction cost balancing

Observation: SKF's treasury department is a cost center unit. The company does not account for or record hedging transaction costs.

Suggestion: As we know from basic economic theory, the cost of a forward deal is about 3% of the transaction amount; therefore, the bigger the amount, the higher the transaction costs. SKF is a big multinational company operating with large exposed amounts, so it is reasonable to presume that transaction costs add up to a significant amount. The more seldom the exposure is hedged, the fewer transactions there are and, consequently, the lower the transaction costs incurred. Since SKF hedges four times a year, the transaction costs might not seem significant as long as no unexpected exchange rate changes take place. On the other hand, in our turbulent world of changes one should be very careful when deciding on the hedging period, since the longer the period, the bigger the probability of unexpected exchange rate changes. Depending on the volume and frequency of hedging transactions, the transaction costs might reach a level at which hedging becomes more costly than useful. Therefore, for the above-mentioned reasons, as well as for possible further surveys, we think it would be very useful for the company to start recording transaction costs. The other crucial issue is the balance between transaction costs and the hedging period. As discussed above, the more often and the bigger the amounts one hedges, the higher the transaction costs, and vice versa. On the other hand, the more frequently the exposure is hedged, the less variability in cash flows and the lower the probability of unexpected exchange rate changes. Therefore, an appropriate balance between the hedging period and the transaction costs should be struck, and for that both the transaction costs and the hedging period must be known. Unfortunately, the company keeps no transaction cost records. Thus, the only thing we can suggest is to start recording transaction costs and then, with both the costs and the hedging frequency in hand, find the point at which hedging starts to become more expensive than useful.
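The trade-off just described can be sketched with a toy calculation. The 3% proportional cost figure is the one quoted above; the fixed fee per deal and the yearly exposure are our own illustrative assumptions, added only to capture the per-transaction element of the cost.

```python
# Illustrative comparison of yearly hedging costs for different hedging
# frequencies: each deal is assumed to carry a fixed fee plus a cost
# proportional to the amount hedged (assumed figures, not SKF data).
def annual_hedging_cost(annual_exposure, hedges_per_year,
                        prop_rate=0.03, fixed_fee_per_deal=2_000):
    amount_per_hedge = annual_exposure / hedges_per_year
    return hedges_per_year * (fixed_fee_per_deal + prop_rate * amount_per_hedge)

for n in (4, 12, 52):   # quarterly, monthly, weekly
    print(n, annual_hedging_cost(100_000_000, n))
```

Once real transaction cost records exist, the assumed parameters can be replaced with observed figures and the same comparison repeated.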


Hedging horizon and financial risk

Observation: SKF's hedging period is three months. The hedging volume is determined by the forecast (Prognos), which in turn is based on historical performance. The forecast is adjusted during the period if any changes appear.

Suggestion: The length of the hedging period is a debatable question. One factor affecting it is the balance between hedging transaction costs and the period; another is the firm's readiness to take a certain amount of risk. As mentioned before, SKF hedges its transaction exposure every three months. For example, at the beginning of January they hedge for January, February and March; at the beginning of April they hedge for April, May and June. If big fluctuations in the first quarter's exchange rates appear, or significant deviations from the billing forecast take place, the forecast amount is adjusted in the second quarter accordingly. However, if changes occur within the first quarter itself, no action is taken. For example, if a big change in the main currencies' exchange rates occurs in the middle of January, the forecast is adjusted only at the beginning of April. This can result in a comparatively long, 2.5-month effectively unhedged period that might dramatically deteriorate SKF's net position. Thus, we think that, in order to avoid such unhedged periods, adjustments should be made every month: at the beginning of January they can hedge for one quarter (January, February and March), and at the beginning of February for the next three months (February, March and April). In this way they can avoid unexpected fluctuations caused by short-period exchange rate changes.
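The monthly rolling adjustment can be sketched as below; the month numbering and the six-month run shown are purely illustrative.

```python
# Rolling three-month hedge windows, re-set every month instead of every
# quarter, so no sub-period is left uncovered after a mid-quarter shock.
def rolling_hedge_windows(start_month, horizon=3, months_to_show=6):
    windows = []
    for m in range(start_month, start_month + months_to_show):
        windows.append([((m + k - 1) % 12) + 1 for k in range(horizon)])
    return windows

print(rolling_hedge_windows(1))
# [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8]]
```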


LESSON 18: CASE STUDY IN ELOF HANSSON


Overview of Elof Hansson
Elof Hansson is a trading company with an even longer history than SKF's. The company was founded by the merchant Elof Hansson in 1897. The Board of Directors and the Managing Director give the following presentation of the company in the annual report for the fiscal year 1999: Elof Hansson, whose registered office is in Gothenburg, is the parent company of the Elof Hansson Group. The company undertakes international trading in three business areas: Forest, Industrial and Consumer Products. Sales are handled via subsidiaries, branch offices and agents in more than one hundred countries. Elof Hansson AB is owned to 99.9% by Elof Hanssons Stiftelse (the Elof Hansson Foundation); in other words, it is a family-owned business.

Main business areas, suppliers, customers, competitors and sales distribution

The forest products area accounts for the largest proportion of the Group's business operations, which is explained by the fact that the merchant Elof Hansson, who founded the company, did his first deals in paper and pulp. The largest customers for paper products are found in Latin America, several African countries, Asia, the Middle East and the Far East. The suppliers are primarily located in Scandinavia and North America, but also in Russia and South America. Traditionally, paper pulp suppliers are found in Scandinavia and North America; however, the incidence of new suppliers for the company's customers in Asia, the Middle East and North Africa has risen. The industrial area is the second largest, comprising steel, pipes, forgings and castings. The Industrial Products Division supplies the mentioned products mostly within Sweden, while the customers of the industrial products are also found in Central America, the Caribbean and Mexico. Consumer products are the third largest field of business activity, and the consumers of these products are located in some fifteen European countries, primarily in Scandinavia and the Baltic region. The main supplier countries are China, India, Pakistan, Romania, Taiwan, Hong Kong and Turkey.


Due to the specific nature of the business, competitors may be not only other trading houses but also the company's suppliers and customers. Since Elof Hansson is a middleman, a former supplier that starts to sell its production directly to former customers thereby becomes a competitor. According to the representative of the company, Mr. Henrik Jerner, the main competitors in Sweden are the trading houses Ekman and Co. and Cellmark. One of the main foreign competitors is the Japanese trading house Sumitomo. The following figures, No. 8 and No. 9, describe the net sales distribution by geographical areas and customer segments.

Figure No. 8: Elof Hansson business volume by products, 1999 (figure not reproduced)


Overview of the company's latest years' activity and future plans

Following the order of the SKF data presentation, some key figures on the size and performance of the company are presented in the following table:

Table No. 5: Elof Hansson key data on size and performance (table not reproduced)

All the above figures and the other numbers presented in this chapter are taken from the company's annual report for 1999. Operating profit, as well as the business volume of the company, was in 1999 the lowest of the last five years. The Chairman of the Board of Directors explained this in the annual report by weak demand and a subsequent shortage of quantities during the latter half of the year.


Nevertheless, taking into account that several strategically important activities were undertaken, the CEO found the financial results acceptable. In the interview, the representative of the company indicated the Asian crisis as the main factor that diminished business volume and income in 1999. The company's activity in the near future will be concentrated on the main forest products area, since the trend in this area remains positive, and trade in forest products will continue to play a dominant role within the company. Demand for paper is expected to remain high, with world consumption rising annually by 1-2%, that is, by five to seven million tons. Among forest products, timber and downgraded paper have a strong standing in this market and will be further reinforced in the current year, downgraded paper through a continuing expansion in Asia. In the coming years, marketing will intensify and include non-Swedish markets, primarily the rest of Scandinavia and the Baltic region.

Elof Hansson's financial objectives

Since the company is essentially owned by one family, the representative of the company indicated that the main financial operating objective is, as it usually is for private business enterprises, profit. The financial risk management objective was defined as smoothing cash flows as well as taking advantage of exchange rate changes by speculating with the available liquid assets. The Group's financial policy covers exchange rate, interest rate and credit risks, and the company manages them actively.

The Group's financial policy regarding the financing of the company's operations is predetermined by the specific nature of those operations, which are mostly short term. Thereby, the biggest part of the company's loans have maturities of 6 months to 1 year.

An analysis of Elof Hansson's financial risk management

The organization of exchange rate risk management is based on the centralization principle and is fully centralized for the Swedish divisions of the company. The most important currency risk to which the Group is exposed is exchange rate exposure, and transaction exposure was defined as one of the highest concerns. As in SKF, in Elof Hansson (EH) all currency exchange operations inside the company or with other companies, exposure measurement (European), hedging and financing operations are made in the internal bank, although there are some significant differences. One of the main ones is that EH measures and manages its exposure every day, while at SKF it is done once a month. EH hedges for every single contract's (order's) period, SKF for 90 days. The specific nature of EH's commercial operations, no manufacturing, just trading, causes the difference in the treasury departments' structure and main activities. EH, being mostly a profit-seeking middleman, has a much bigger trading portfolio. Unfortunately, due to the confidentiality of the information, both companies refused to give any real numbers describing the trading portfolio; the only information we received was that EH's portfolio is comparatively big, while SKF's is comparatively small. The treasury department's main concern in EH is commercial and trading portfolio management, while in SKF it is only the hedging of commercial transactions. No trading transactions among subsidiaries take place in EH and, therefore, no exchange operations among them are needed. Otherwise, trade with other companies is organized in the same way as in SKF: every unit of the company pays and receives payments only in its local currency, and exchange is made in the internal bank (Treasury Department). As opposed to SKF, the majority of EH's contracts are short or medium term, and the majority of suppliers and customers are temporary. That requires a stricter management attitude. Therefore, EH measures and manages exchange rate exposure on a daily basis. Every day the treasury center measures exchange rate risk exposure in a similar way to what SKF does at the end of the month. The company's main exposure management strategy is netting: based on the foreign exchange orders received from the subsidiaries during the day, the treasury department nets the numbers and buys the required amounts in the spot or forward markets. At the end of the day all the exchange rate orders are added together and the one-day exposure is calculated. As with SKF, EH hedges 100% of the currency exposure. Internally, a 2.5% value at risk (VaR) factor is set for every currency, representing the foreign currency transaction magnitude allowed during the day. If the 2.5% per-day value at risk requirement is exceeded, trading, loans and commercial portfolio reports are produced, covering all the foreign currency transactions made during the day. The company's exposure management system, as it stands today, has not been changed for the last four years, and the representative of the company expressed his full satisfaction with it. Forwards are the main instruments the company uses for hedging exposure, though the representative expressed his willingness to introduce options more widely into the subsidiaries' hedging activity. The commercial activity of the company's subsidiaries within Europe is hedged in the same way as that of the company's units in Sweden: the Treasury Department nets and hedges the exposed amounts according to the received exchange rate orders. The North American and Latin American subsidiaries, due to the big time difference, net and hedge their exposed cash flows in the local financial markets by themselves, providing the center with financial reports at the end of the month. The currency of denomination is the Swedish krona. The Treasury Department in EH, as in SKF, is a cost center, although in EH the Treasury Department's dealers sometimes have so-called realized and unrealized profit, which appears because quite often the exposure bought from a subsidiary at one price (for example in the morning) is hedged later (in the afternoon) at another. 70% of the company's sales are in foreign currency; the main currencies are USD and EUR. The yearly currency flows for 1999 are shown in the following table:

Table No. 6: Elof Hansson currency flows (table not reproduced)
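The daily netting step described above can be sketched as follows; the order data and currency codes are illustrative, not EH figures.

```python
from collections import defaultdict

# Net the day's foreign exchange orders per currency: positive amounts are
# purchases requested by subsidiaries, negative amounts are sales. Only the
# net amount per currency would then be covered in the spot or forward market.
def net_daily_orders(orders):
    net = defaultdict(float)
    for currency, amount in orders:
        net[currency] += amount
    return dict(net)

day_orders = [("USD", 1_200_000), ("USD", -800_000),
              ("EUR", 300_000), ("EUR", -450_000)]
print(net_daily_orders(day_orders))   # {'USD': 400000.0, 'EUR': -150000.0}
```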


The pricing strategy the company uses is currency clauses, according to which, in the majority of cases, the contractual parties share the exchange rate risk; no quantity adjustment strategy is used. Let us take the company's most common business transaction and try to fit it into the transaction life span as described in Figure No. 1. EH finds the supplier willing to sell the product at the lower price and the final buyer willing to buy the product at the higher price. Usually, EH is granted a 1-3 month deferred payment and gives the final buyer a 3-9 month deferred payment. The subsidiary receives the order from the buyer and the same day has to send it to the internal bank for hedging. The quotation exposure starts: for the same reasons described in the SKF case, at that moment exchange rate exposure or transaction risk begins, and EH, like SKF, is not measuring or managing that risk. If all the parties are satisfied with the conditions of the business transaction, the final buyer places an order with EH and the backlog exposure starts. From this moment, cash flow, interest rate and credit risk exposure start. At this point we should mention one significant difference between SKF and EH: both EH's cash outflows for imported production and its cash inflows for exported production are exposed to exchange rates. This means that if the exchange rate changes unfavourably and the company receives less for the production sold, the amount EH has to pay the supplier should decrease as well, provided the time lag between the moments when the production was bought and sold is small and the currency is the same; in that case EH is naturally hedged. However, if the time lag is long (6-9 months) and the currencies in which EH buys and sells the production are different, then, should the exchange rate change, the cash inflow might be smaller and the outflow bigger than expected, and the expected profit might be squeezed into an unexpected loss. In SKF, by contrast, only the cash inflows for production are exposed, because the bigger part of the raw material (steel) is produced by the company itself. As we can see, EH has to be more attentive and careful with exchange rate risk, since it is exposed to higher exchange rate risk. EH starts to measure transaction exposure from Time 2, based on historical performance. EH is exposed to interest rate risk in the same way as was described for SKF, so we are not going to describe it again. In summary, we can say that EH measures and manages its exchange rate exposure from Time 2 to Time 4, but not for Time 1. The company does not consider translation risk important and, therefore, does not hedge it.

Observations and suggestions for Elof Hansson

Different approaches for the initial moment of the exposure management

Observation: In the case of Elof Hansson, we found that they start to hedge the contracts at Time 2; that is, they hedge the contracts (orders) after the contracts (orders) have been confirmed.

Suggestion: We have mentioned the starting moment of transaction exposure several times in this paper in order to emphasize its importance. An interesting coincidence is that Elof Hansson's transaction exposure hedging starting point is exactly the Time 2 that we suggested for SKF. We should keep in mind, however, that the companies are from different industry clusters and, thereby, the most fitting exposure management moment for one does not necessarily suit the other. Elof Hansson, in comparison with SKF, has a small percentage of long-term customer relationships, so it is difficult for them to forecast the number of contracts signed or deals set. Thereby, we suggest that they hedge a lower percentage starting from Time 1, in the quotation period, based on the customers' historical performance, and hedge 100% after the contracts have been signed, that is, at Time 2. Since the company has no historical hedging transaction cost records, we were unable to come up with a suggested hedging percentage.

Different approaches for the choice of hedging period and transaction costs

Observation: Both companies' treasury departments are cost centers; they do not account for or record hedging transaction costs.

Suggestion: The first observation suggests that EH is more exposed to exchange rate changes than SKF. Therefore, EH's exposure should be hedged more frequently, and that is the way it actually is: the exposure is measured and managed every day, while in SKF it is once a month. Thus, EH should have many more hedging transactions and, as a result, higher transaction costs. But the more frequently the exposure is hedged, the less variability in cash flows and the lower the probability of unexpected exchange rate changes, which is a critical issue for EH since, as mentioned earlier, EH is more exposed, meaning a higher risk of cash flow variability resulting from exchange rate changes. On the other hand, the amounts the company hedges are comparatively smaller than at SKF, and that factor should reduce the transaction costs. But is this the best hedging frequency for the company? It might be possible to get some guidance if we had historical records of hedging transaction costs; unfortunately, that is not the case. In order to be able to come up with the best-fitting hedging period, a reasonable basis for the calculation is needed. Therefore, we suggest that the company start to record its hedging transaction costs.

Hedging period and netting strategy

Observation: Elof Hansson is hedging on a daily basis.

Suggestion: As mentioned, EH hedges every day and uses a netting strategy to manage the transaction exposure. As we know from basic theory, the netting strategy is most useful when there are many opposite-direction exchange rate transactions, i.e. when one subsidiary needs to sell the same amount of a currency that another needs to buy. The longer the period, the bigger the probability of opposite transactions and the greater the benefit of the netting strategy. As mentioned earlier, the more often we hedge, the higher the hedging transaction costs. Thus, is it really necessary to hedge transaction exposure every day? To answer that question, one would need to measure and compare the hedging transaction costs actually incurred with the costs that would be incurred if hedging were done just once a week or once a month, and with the gain from the transaction costs avoided by netting opposite-direction transactions. Especially in cases where many opposite-direction transactions take place, decreasing the hedging frequency might be an important issue to think about.

Risk sharing, billing currency, VaR

Since EH already handles risk sharing in the way we would suggest, and the billing currency question does not arise for EH at all, we will just make a summary comparison of these issues between the companies. The biggest part of EH's contracts are short or medium term for comparatively small amounts, and the opposite is true at SKF. Production prices in such contracts at EH float with changes in the market, while SKF uses mostly fixed prices. EH makes wide use of currency clauses, while SKF does not. Therefore, in our view, EH is exploiting the available pricing strategy, while SKF is not so active in this sphere, although we should keep in mind that, due to the specific nature of SKF's industry, the company has little price-setting capability, meaning that there is little chance that buyers would accept currency clauses or, especially, floating prices. Since EH is a middleman and does not have any manufacturing, the billing currency question is not so important for it, while at SKF that is one more thing to be improved. The last issue worth mentioning is value at risk (VaR). To our knowledge, only EH uses VaR limits on exchange rate deals, which we found to be an appropriate one-day control tool for exchange rate deals.
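A one-day deal limit of the kind EH applies can be sketched as below; the 2.5% figure is the one quoted earlier, while the exposure base and the day's deal volume are purely illustrative assumptions.

```python
# Check whether the day's foreign currency deals breach the per-day
# value-at-risk style limit; if so, the trading, loans and commercial
# portfolio reports described above would be produced.
def exceeds_daily_limit(days_deals_sek, exposure_base_sek, limit_pct=2.5):
    return abs(days_deals_sek) > (limit_pct / 100.0) * exposure_base_sek

print(exceeds_daily_limit(days_deals_sek=9_000_000,
                          exposure_base_sek=300_000_000))   # True (limit is 7.5 million)
```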

Hedging frequency related to different industries

Observation: Elof Hansson, as a trading company, takes the role of the middleman, both buying and selling at the same time. That consequently creates a double exposure, on the import as well as the export side, while at SKF most of the raw material is produced by the company itself and, therefore, most exposure arises on the export side.

Suggestion: For Elof Hansson it is more important to control the exposure, since they are exposed from both sides; the company needs more intensive and careful management of the exchange rate risk. Elof Hansson measures the exposure daily, demonstrating that they pay a great deal of attention to exchange rate exposure. On the other hand, much of the merchandise the company buys and sells at the same time. As we know, if the company is buying and selling in the same currency and at the same time, it may find itself naturally hedged. In that case the frequency of hedging should decrease, since we know that hedging is not costless.

Conclusion

In the final part of our work we would like to make a short summary of the main issues we have analyzed and the results we have achieved. In order to fulfill the first part of our purpose, we reviewed the existing classifications and terminologies of financial risk and its constituent parts, emphasizing the one we found most useful, i.e. the transaction exposure life span. We think that transaction exposure is the most important one for the companies for the following reasons:

Transaction exposure spans the whole life of a company's business transactions, from price quotation to final settlement;

Transaction exposure covers the biggest part of a company's exposure, since it deals with business transactions, the most fundamental issue in a profit-seeking company's activity.

After the analysis of the chosen companies we can conclude that there are no general rules for setting the hedging period; each company's specific characteristics dictate the hedging period requirements. Two big Swedish multinational companies with large open currency positions were chosen in order to fulfill the second part of the purpose. First we reviewed the main characteristics of the companies' exchange rate risk management systems in order to be able to apply the theoretical transaction life span to a specific company business transaction. One was SKF, belonging to the ball bearing manufacturing industry, while the other was Elof Hansson (EH), representing the trading sector. Differences between the companies result in different levels of exchange rate risk and, thus, differences in exchange rate management strategies. The industries that the companies belong to predetermine the following differences:

In SKF the biggest part of the bearings is produced from self-made Swedish steel; thereby, it is exchange rate exposed only on the sales side, not on the cost side, while EH is exposed from both sides;

About 50% of SKF's contracts are long term with long-standing customers, while the opposite is true for EH;

SKF, in comparison with EH, has small price-setting capability.

Nevertheless, the companies have some significant similarities concerning exchange rate exposure management:

Both companies' financial risk management policies define exchange rate risk exposure as the main one; in managing the exchange rate exposure, the main focus is on transaction exposure management;

In both companies, exchange rate risk exposure management is centralized in the headquarters' treasury department, the so-called internal bank, where all the currency exchange operations take place; subsidiaries deal only with their local currency and do not carry any currency risk;

The companies use a netting strategy for the exposed amounts and do not record their hedging transaction costs;

The forward contract is both companies' main hedging instrument.

However, there are more differences than similarities between the companies' exchange rate management systems, such as:

SKF hedges its exchange rate exposure every three months, while EH does it every day;

SKF does not have any limits for exchange rate deals, while EH has a 2.5% per-day value at risk (VaR) requirement;

SKF, signing long-term big contracts, does not use any kind of pricing strategy elements, while EH uses currency clauses and floating price strategies.

The companies' actual transaction exposure management strategies were compared with the theoretical framework. According to the theory, both companies start to hedge their transactions later than the origin of the transaction risk: in theory, transaction risk appears when the seller quotes a price for the buyer (Time 1), while SKF starts to hedge when the seller ships the products and bills the buyer (Time 3) and EH's starting point is when the buyer places a contract (order) with the seller (Time 2). A short summary of the above-described financial risk management characteristics of the companies is given in the following table (Table No. 7).

Table No. 7: Summary of companies' financial risk management (table not reproduced)

At this stage, the first important step for the firm is to fix the starting moment of the hedging strategy, based on the firm's operating characteristics. Since SKF has a big percentage of long-term contracted relationships, it becomes very important to set a certain hedging percentage in the quotation period and use 100 percent hedging after the contract is signed; otherwise, they are exposed to exchange rate risk during these periods. Since EH has a small percentage of fixed long-term contracted relationships, hedging after the contract (order) has been confirmed is the most important for it: the possibility of forecasting future cash flows before that moment is very small, which is why hedging before the contract is placed might turn out to be more costly than useful, although in theory the company is incurring risk from Time 1 to Time 2. So, in this sense, different firms may have different most fitting starting moments of hedging, depending on their specific features.

One more important conclusion we made is about natural hedging possibilities. EH can achieve a natural hedge of transaction exposure, as a middleman, if it buys and sells in the same currency at the same time, while SKF can achieve it through diversification of operations among foreign subsidiaries.

The third important conclusion concerns the balancing of the hedging period and the transaction costs. Neither SKF nor EH recognized the importance of transaction cost recording. The longer the hedging period, the lower the transaction costs incurred; on the other hand, there is a bigger possibility of fluctuation in the firm's cash flows, which, as a consequence, will affect the firm's future value. So it is very important to keep a record of transaction costs, because that makes it possible to compare the hedging transaction costs of different periods in order to find the break-even point for the most suitable hedging period.

The hedging period, especially for SKF, has one more aspect. The company hedges every three months, so at the end of the hedged period it faces a big risk of being exposed to exchange rate changes, if any occur. Therefore, the conclusion, and at the same time the suggestion, is that SKF would carry less risk if it were to hedge every month. EH hedges transaction exposure every day using the netting strategy; too high a hedging frequency might result in a loss of the netting system's advantage. Therefore, we think that EH should consider revising its hedging period, especially if one-way foreign exchange deals are required one day and the reverse flows arrive the next. One more important thing to mention is the one-day limit on currency exchange deals, used to avoid big fluctuations in the firm's cash outflows and inflows. EH's one-day value at risk (VaR) limit is a very good example of how a firm's speculative currency position can be controlled. On the other hand, this raises the further question of how to set such a limit, which might be an interesting topic for further research in this field. The last conclusion is about the pricing strategy. Even if it is difficult for the firm (SKF) to implement pricing strategy elements, from the theoretical point of view they are a useful tool to decrease transaction exposure. Such pricing strategy elements as risk sharing or floating price clauses might be very useful when a firm is planning to enter a new market in developing countries, where the risk is very high; therefore, at least an attempt to implement the mentioned pricing strategy tools might end up with a significant decrease in exchange rate risk. Finally, we would like to point out that the main knowledge we gained from this work is that there is no general transaction exposure management rule applicable to all companies. Every company has its own specific characteristics, which depend on a lot of different macroeconomic factors. The comparison of the companies' transaction management strategies provides the companies with an exceptional opportunity to get a clear and detailed picture of the other company's transaction management strategy. Such information is usually not publicly disclosed; therefore, the companies have an excellent chance to learn from each other.


LESSON 19: OPTION PRICING MODELS


An option-pricing model is a mathematical formula or model into which you insert the following parameters:
Underlying stock or index price
Exercise price of the option
Expiry date of the option
Expected dividends (in cents for a stock, or as a yield for an index) to be paid over the life of the option
Expected risk-free interest rate over the life of the option
Expected volatility of the underlying stock or index over the life of the option

When the formula is applied to these variables, the resulting figure is called the theoretical fair value of the option.

PRICING MODELS USED BY THE MARKET

There are two main models used in the market for pricing equity options: the Binomial model and the Black-Scholes model.

THE BINOMIAL OPTION-PRICING MODEL

Introduction

Thus far, we have only been able to place a lower and upper bound around the value of an option prior to its expiration. To produce an exact formula, we will need to make specific assumptions about the way the underlying asset price and riskless return evolve over time.

We begin by making the simplest possible, but still interesting, assumption governing this uncertainty: the option expires after a single period (of known but arbitrary duration) in which the underlying asset price moves either up to a single level or down to a single level. In addition to being able to invest in a European option, we can also invest in its underlying asset and cash. This approach, when generalized to accommodate many periods, is known as the standard binomial option-pricing model.

We assume that there are no riskless arbitrage opportunities, first between the underlying asset and cash, and second between the option and the underlying asset. In that case, the prices of these three securities must be set as if their payoffs were discounted back to the present using the same two state-contingent prices. Expressed mathematically, we have three equations (one for the asset, one for cash, and one for the option) in two unknowns. As a result, we can solve the first two equations for the two state-contingent prices. Finally, knowing these state-contingent prices and using the third equation, we can write down a formula for the current option value as a function of its current underlying asset price and the riskless return.

Option Pricing Formula

The option-pricing problem we now address is to find an exact formula or method which transforms the current underlying asset price S and the current time-to-expiration t into a standard option's current value. Among the six fundamental determinants of option values (asset price S, strike price K, time-to-expiration t, riskless return r, volatility s, and payout return d), these two are singled out because they must necessarily change as the expiration date approaches. In brief, we search for a function f of S and t, where the other determinants enter as fixed parameters, which equals the concurrent option value C or P.

For calls at expiration, we already know the answer: C* = max[0, S* - K]; and similarly for puts, P* = max[0, K - S*]. The unanswered question is what formula to use prior to expiration. Simple arbitrage arguments tell us at least that, prior to expiration, an American call value C must be less than the asset price S, but more than the call's current exercisable value max[0, S - K] and also more than its present value max[0, S d^-t - K r^-t] when volatility is zero. In summary:

S ≥ C ≥ max[0, S - K, S d^-t - K r^-t]

Similarly, for an American put:

K ≥ P ≥ max[0, K - S, K r^-t - S d^-t]

For example, if S = K = 100, r = 1.08, d = 1.03 and t = 1, this places only very loose bounds on the call value, 100 ≥ C ≥ 4.49, since S d^-t - K r^-t = 100/1.03 - 100/1.08 = 4.49.

For European calls and puts, while the lower bounds must be loosened, the upper bounds can be tightened:

S d^-t ≥ C ≥ max[0, S d^-t - K r^-t]

K r^-t ≥ P ≥ max[0, K r^-t - S d^-t]

Single Period Model

Black and Scholes used a replicating portfolio argument to derive their option pricing formula. To mimic that argument with a binomial model, we form a portfolio consisting of delta units of the underlying asset and an investment in cash, such that the portfolio has payoffs equal to the value of the option in each of the two possible states at the end of the period. In this analysis, we also account for payouts, allowing for the option not to be payout-protected. If there are no riskless arbitrage opportunities, the current cost of constructing the replicating portfolio must equal the cost of the option. This leads to a simple single-period formula for the current value of the option, indeed the very same formula that was derived earlier via state-contingent prices. This satisfies our goal of finding an exact option pricing formula prior to expiration under conditions of uncertainty. Despite its simplicity, it reveals many of the economic ideas that lie behind modern option pricing theory. First, the current value of the option is given by a formula that depends on the concurrent underlying asset price, the strike price, the volatility (as proxied for by the sizes of the up and down moves of the underlying asset), the riskless return and the payout return. Second, investors are assumed only to act in the market to eliminate riskless arbitrage opportunities. They need not be risk-averse or even rational.


Significantly, the formula says the option should be priced by discounting its risk-neutral expected value at the end of the period, where the discount rate is the riskless return, and where the risk-neutral probabilities have a simple well-defined form, determined solely by the riskless return, payout return, and the up and down move sizes. If the option is American, the valuation formula is only slightly more complex: the option is worth either its current exercisable value or its holding value, whichever is greater. The simplicity of the analysis seems to depend on the assumption of binomial underlying asset price movements. If, instead, the asset price could move to three possible levels, no replicating portfolio (involving solely the underlying asset and cash) could match the future values of the option. However, most of the force of this objection can be removed, as we shall see, by generalizing the model to many periods.

Binomial Formula Interpretation

C = [p Cu + (1 - p) Cd] / r
where p = [(r/d) - d] / (u - d); here r is the riskless return per period, the d in r/d denotes the payout return, u and d are the up and down move sizes, and Cu and Cd are the option values after an up or a down move.

Assumptions:

1. Exact formula for the value of an option prior to expiration.
2. Option value depends only on S, K, u, d, r and d (the payout return).
3. Option value depends on only one random variable: the underlying asset price.
4. Investor motivation: eliminating arbitrage opportunities; neither rationality nor risk aversion is required.


Several comments are in order. It was easy to write down the formula for the value of a call at expiration (max[0, S* - K]); now we have the formula for the value of a call prior to expiration in terms of its possible values Cu and Cd one period later. If this were exactly one period before expiration, this formula clearly depends only on S, K, u, d, r and the payout return (S and K enter through the payoffs Cu = max[0, uS - K] and Cd = max[0, dS - K]). Interpreting the spread between u and d as a proxy for asset volatility, these variables, along with the time-to-expiration, are the fundamental determinants of option prices.
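As a minimal sketch, the one-period formula can be evaluated directly; the numbers used below are illustrative, not taken from the text.

```python
# One-period binomial value of a European call, following
# C = [p*Cu + (1 - p)*Cd] / r with risk-neutral probability
# p = (r/payout - d) / (u - d); r and the payout return are gross
# per-period returns (e.g. r = 1.08 for 8%).
def one_period_call(S, K, u, d, r, payout=1.0):
    Cu = max(0.0, u * S - K)          # value after an up move
    Cd = max(0.0, d * S - K)          # value after a down move
    p = (r / payout - d) / (u - d)    # risk-neutral up probability
    return (p * Cu + (1 - p) * Cd) / r

print(one_period_call(S=100, K=100, u=1.25, d=0.80, r=1.08))
```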

In any model in the social sciences, it is prudent to ask what is being assumed about human behavior and psychology. In this case, we only assume that investors price securities so that there are no riskless arbitrage opportunities. This arose in our derivation when we assumed that the riskless return was bracketed by the total return of the underlying asset and when we assumed that the current cost of the option and its replicating portfolio must be the same. Interestingly, we have not assumed (as is common in many models of pricing in financial economics) that investors are risk-averse, or indeed that they are even rational in the economist's sense of making transitive choices (if an investor prefers A to B and B to C, then he prefers A to C).

Multi Period Model

The principal defect of the single-period binomial option-pricing model is overcome by extending it to many periods, constructing a recombining binomial tree of asset prices working forward from the present. One path through the tree represents a sample drawn from the universe of possible future histories. Inverting this process and working backward from the end of the tree, being careful at each node, for American options, to consider the possibility of early exercise, we then calculate the current option value. For a European option, using the risk-neutral valuation principle, a shortcut is available: we simply calculate its discounted risk-neutral expected expiration-date payoff. With a little algebra, we can derive a single-line formula for the current value of a European option even though it expires an arbitrarily large number of periods later. We use a series of examples to illustrate this combination of working forward to construct the binomial tree of asset prices and then working backward to derive the current option value for European and American calls and puts, with and without payouts. We then discuss some curious properties of binomial trees based on the ideas of sample paths and path-independence. It is fortunate that the binomial option pricing model is based on recombining trees, otherwise the computational burden would quickly become overwhelming as the number of moves in the tree is increased. All sample paths that lead to the same node in the tree have the same risk-neutral probability. The types of volatility (objective, subjective and realized), which in real life are usually different, are indistinguishable in our recombining binomial tree. Finally, in the continuous-time limit, as the number of moves in the tree (for a fixed time-to-expiration) becomes infinite, the sample path, though itself continuous, has no first derivative at any point. We showed earlier that the term structure of spot and forward returns could be inferred from the concurrent prices of otherwise identical bonds of different maturities. In a similar manner, the inverse problem for binomial trees can also be solved; that is, we can infer a binomial tree from the concurrent prices of otherwise identical European options with different strike prices. This is called an implied binomial tree.

Volatility in Binomial Trees

In most economic situations involving a random variable, there are three types of volatility:
1. The objective population volatility: the true volatility of the random variable, true in the sense that if history could be rerun many times, on average the realized volatility of the random variable would tend to converge to this volatility.
2. The subjective population volatility: the volatility believed by the relevant agents to govern the random variable, that is, their best guess about the objective population volatility.
3. The realized sample volatility: the historically measured volatility of the realized outcomes of the random variable along its realized sample path.
In the standard binomial option pricing model, these three are identical. It is assumed that all investors believe in the same binomial tree. That is, they all believe that the underlying asset price follows a binomial movement. They all believe that the resulting tree is recombining, so that an up move followed by a down move leads to the same outcome as a down move followed by an up move. And they have the same estimate of the possible up and down moves at every point in the tree. Indeed, were this not the case, then two investors would value an option differently, so that whatever the market price of the option, at least one of them would believe there were a riskless arbitrage opportunity. Since we


rule this out, in effect we are assuming that volatilities (1) and (2) are the same. Moreover, investors all think the next up and down moves at every node in the tree will be the same everywhere in the tree, and that u = 1/d. Thus log u = -log d, so that (log u)^2 = (log d)^2. This means that along any path in the tree the sampled (logarithmic) volatility around a zero mean will be the same. For example, consider two paths in a five-move tree: u, d, d, u, d and d, d, u, u, u. The sample variance of the first path is:

[(log u)^2 + (log d)^2 + (log d)^2 + (log u)^2 + (log d)^2] / 5 = (log u)^2

The sample variance of the second path is:

[(log d)^2 + (log d)^2 + (log u)^2 + (log u)^2 + (log u)^2] / 5 = (log u)^2

This is an extraordinary situation. In real life, realized history can be interpreted as a sample from a population of possible histories. It would be strange indeed if each sample were guaranteed to have the same volatility computed from its time series of events.

Hedging

We can use binomial trees not only to value options, but also to determine the sensitivity of these values to key determining variables: underlying asset price, time-to-expiration, volatility, riskless return and payout return.

Delta is the sensitivity of current option value to its current underlying asset price. It is easily calculated from a binomial tree. While working backward, stop one move before reaching the beginning of the tree and collect the two nodal values. The delta is their difference divided by the corresponding difference in underlying asset prices including payouts. Gamma measures the rate at which the delta changes as the underlying asset price changes. This is also easily calculated from a binomial tree, but by stopping two periods before the beginning. It indicates at which points during the life of an option replication will be particularly difficult in practice.
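The backward-induction procedure, and the delta read off one move into the tree, can be sketched as follows; the parameter values in the usage line are illustrative assumptions.

```python
# Value an American put on an n-move recombining binomial tree (no payouts)
# and return the delta taken from the two nodal values one move into the
# tree, as described above. u and d are gross up/down moves per period and
# r is the gross riskless return per period.
def binomial_american_put(S0, K, u, d, r, n):
    p = (r - d) / (u - d)                      # risk-neutral up probability
    # option values at expiration; node j has price S0 * u**j * d**(n - j)
    values = [max(0.0, K - S0 * u**j * d**(n - j)) for j in range(n + 1)]
    delta = None
    for step in range(n - 1, -1, -1):          # work backward through the tree
        values = [
            max(K - S0 * u**j * d**(step - j),                  # early exercise
                (p * values[j + 1] + (1 - p) * values[j]) / r)  # holding value
            for j in range(step + 1)
        ]
        if step == 1:                          # two nodal values one move in
            delta = (values[1] - values[0]) / (S0 * u - S0 * d)
    return values[0], delta

price, delta = binomial_american_put(S0=100, K=100, u=1.1, d=1/1.1, r=1.02, n=50)
print(round(price, 4), round(delta, 4))
```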


Theta measures the sensitivity of the current option value to a reduction in time-to-expiration. Again, it is also easily calculated from a binomial tree by comparing two adjacent option values computed when the underlying asset price is the same. Vega, rho and lambda measure the sensitivity of current option value to changes in volatility, the riskless return and the payout return, respectively. To calculate these, two current option values are compared from two otherwise identical binomial trees, except that they are based on slightly different volatilities, riskless or payout returns.

Similar to bond duration, fugit measures the risk-neutral expected life of an option, accounting for reduction in its life from early exercise. This too can be calculated by working backward in the binomial tree.

THE BLACK SCHOLES MODEL

The Black and Scholes Option Pricing Model involved calculating a derivative to measure how the discount rate of a warrant varies with time and stock price.



Assumptions of the Black and Scholes Model:

The stock pays no dividends during the option's life: Most companies pay dividends to their shareholders, so this might seem a serious limitation to the model, considering the observation that higher dividend yields elicit lower call premiums. A common way of adjusting the model for this situation is to subtract the discounted value of a future dividend from the stock price.

European exercise terms are used: European exercise terms dictate that the option can only be exercised on the expiration date. American exercise terms allow the option to be exercised at any time during its life, making American options more valuable due to their greater flexibility. This limitation is not a major concern because very few calls are ever exercised before the last few days of their life: when you exercise a call early, you forfeit the remaining time value on the call and collect only the intrinsic value, and towards the end of the life of a call the remaining time value is very small while the intrinsic value is the same.

Markets are efficient: This assumption suggests that people cannot consistently predict the direction of the market or an individual stock. The market operates continuously, with share prices following a continuous Itô process. To understand what a continuous Itô process is, you must first know that a Markov process is one where the observation in time period t depends only on the preceding observation; an Itô process is simply a Markov process in continuous time. If you were to draw a continuous process, you would do so without lifting the pen from the piece of paper.

No commissions are charged: Usually market participants do have to pay a commission to buy or sell options. Even floor traders pay some kind of fee, but it is usually very small. The fees that individual investors pay are more substantial and can often distort the output of the model.

Interest rates remain constant and known: The Black and Scholes model uses the risk-free rate to represent this constant and known


rate. In reality there is no such thing as the risk-free rate, but the discount rate on U.S. Government Treasury Bills with 30 days left until maturity is usually used to represent it. During periods of rapidly changing interest rates, these 30-day rates are often subject to change, thereby violating one of the assumptions of the model.

Returns are lognormally distributed: This assumption suggests that returns on the underlying stock are lognormally distributed (equivalently, that continuously compounded returns are normally distributed), which is reasonable for most assets that offer options.

Greeks

Delta: Delta is a measure of the sensitivity of the calculated option value to small changes in the share price.

Gamma: Gamma is a measure of the calculated delta's sensitivity to small changes in the share price.

Theta: Theta measures the calculated option value's sensitivity to small changes in the time till maturity.

Vega: Vega measures the calculated option value's sensitivity to small changes in volatility.

Rho: Rho measures the calculated option value's sensitivity to small changes in the risk-free interest rate.
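The Black-Scholes formula itself is not written out in the text; as a reference, a minimal sketch of the standard formula and its delta is given below. The usage line reuses the S = $48, E = $50, r = 6%, sigma = 40% figures used for the graphs that follow.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, T, r, sigma, q=0.0, kind="call"):
    """Standard Black-Scholes value and delta of a European option.
    S: spot, K: strike, T: years to expiry, r: continuously compounded
    risk-free rate, sigma: volatility, q: continuous dividend yield."""
    d1 = (math.log(S / K) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if kind == "call":
        value = S * math.exp(-q * T) * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
        delta = math.exp(-q * T) * norm_cdf(d1)
    else:
        value = K * math.exp(-r * T) * norm_cdf(-d2) - S * math.exp(-q * T) * norm_cdf(-d1)
        delta = -math.exp(-q * T) * norm_cdf(-d1)
    return value, delta

print(black_scholes(S=48, K=50, T=0.25, r=0.06, sigma=0.40))   # 3-month call
```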

Relationship between Call Premium and the Underlying Stock's Price

The following graphs show the relationship between a call's premium and the underlying stock's price. The first graph identifies the intrinsic value, speculative value, maximum value, and the actual premium for a call.

The following five graphs show the impact of diminishing time remaining on a call with S = $48, E = $50, r = 6%, sigma = 40%: Graph #1, t = 3 months; Graph #2, t = 2 months; Graph #3, t = 1 month; Graph #4, t = 0.5 months; Graph #5, t = 0.25 months. (Graphs not reproduced.)

Graphs #6-9 show the effect of a changing sigma on the relationship between call premium and security price, with S = $48, E = $50, r = 6%: Graph #6, sigma = 80%; Graph #7, sigma = 40%; Graph #8, sigma = 20%; Graph #9, sigma = 10%. (Graphs not reproduced.)

The relationship between fair value and market price

Although the fair value may be close to where the market is trading, other pricing factors in the marketplace mean fair value is used mostly as an estimate of the option's value. Moreover, fair value will depend on the assumptions regarding volatility levels, dividend payments and so on that are made by the person using the pricing model. Different expectations of volatility or dividends will alter the fair value result. This means that at any one time there may be many views held simultaneously on what the fair value of a particular option is. In practice, supply and demand will often dictate at what level an option is priced in the marketplace. Traders may calculate the fair value of an option to get an indication of whether the current market price is higher or lower than fair value, as part of the process of making a judgement about the market value of the option.


Hedging

Delta measures the sensitivity of an option value, ceteris paribus, to a small change in its underlying asset price. So it makes sense to calculate the delta by taking the first partial derivative of the option value, as expressed by the Black-Scholes formula, with respect to the underlying asset price. Other hedging parameters, including gamma and vega, can also be derived from the Black-Scholes formula by taking the appropriate partial derivatives.

We can also use the Black-Scholes formula to measure the local risk of an option as measured by its own volatility or its beta. To do this, we apply the simple result that the local option volatility or beta equals the volatility or beta of its underlying asset scaled by the option omega. For some purposes, we may also want to measure global properties of an option that apply on average over its remaining life. As an example, we show that the expected return of an option over all or some portion of its life can be easily calculated by reinterpreting the Black-Scholes formula.


Commonly, several different options but with the same underlying asset are simultaneously held in a portfolio. The delta of such a portfolio measures the amount by which its value changes for a small increase in the underlying asset price. Fortunately, having calculated the deltas of the individual options in the portfolio, the delta of the portfolio as a whole is calculated from a simple weighted average of the constituent option deltas. A similar additivity property also applies to gamma.

One application of portfolio deltas is to the construction of option portfolios which are almost insensitive to movements in the underlying asset price. Such delta-neutral portfolios are useful for option market makers, who must take positions in options but do not want to risk losses because of unfavorable asset price changes. They are also used by investors who believe they can identify options that are mispriced relative to each other but who have no opinion about the direction of changes in the underlying asset price.


Volatility

The volatility figure input into an option-pricing model reflects the assumptions of the person using the pricing model. Volatility is defined technically in various ways, depending on the assumptions made about the underlying asset's price distribution. For the regular option trader it is sufficient to know that the volatility a trader assigns to a stock reflects expectations of how the stock price will fluctuate over a given period of time. Volatility is usually expressed in two ways: historical and implied. Historical volatility describes the volatility observed in a stock over a given period of time. Price movements in the stock (or underlying asset) are recorded at fixed time intervals (for example every day, every week, or every month) over a given period. More data generally leads to more accuracy. Be aware that a stock's past volatility may not necessarily be reproduced in the future, so caution should be used in basing estimates of future volatility on historical volatility. In estimating future volatility, a frequently used compromise is to assume that volatility over a coming period of time will be the same as the measured historical volatility for a period of the same length that has just finished: if you want to price a three-month option, you may use three-month historical volatility. Implied volatility relates to the current market for an option. Volatility is implied from the option's current price, using a standard option-pricing model: keeping all other inputs constant, you can put the current market price of an option into any theoretical option price calculator and it will calculate the volatility implied by that option price.
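As a minimal sketch of that inversion, the implied volatility can be backed out numerically, here by bisection and reusing the black_scholes function sketched earlier; the market price and parameters in the usage line are illustrative.

```python
# Back out the volatility implied by an observed option price. Relies on the
# fact that a European call value is increasing in volatility.
def implied_volatility(market_price, S, K, T, r, q=0.0, kind="call",
                       lo=1e-4, hi=5.0, tol=1e-8):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        price, _ = black_scholes(S, K, T, r, mid, q, kind)
        if price > market_price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(implied_volatility(market_price=2.50, S=48, K=50, T=0.25, r=0.06))
```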


LESSON 20: HEDGING WITH FORWARD / FUTURE CONTRACTS

Introduction

Despite the recent adverse press they have received, derivative securities provide a number of useful functions in the areas of risk management and investments. In fact, derivatives were originally designed to enable market participants to eliminate risk. A wheat farmer, for example, can fix a price for his crop even before it is planted, eliminating price risk. An exporter can fix a foreign exchange rate even before beginning to manufacture the product, eliminating foreign exchange risk. If misused, however, derivative securities are also capable of dramatically increasing risk.

This module focuses on the mechanics of forward and futures contracts, with a particular emphasis on the interrelationship between the various contracts and the spot price of the underlying asset. The spot price is the price of an asset for which the sale transaction and settlement occur immediately.

Forward Contracts

The Mechanics of Forward Contracts

A forward contract is a contract made today for delivery of an asset at a prespecified time in the future at a price agreed upon today. The buyer of a forward contract agrees to take delivery of the underlying asset at a future time T at a price agreed upon today; no money changes hands until time T. The seller agrees to deliver the underlying asset at time T at that price; again, no money changes hands until time T. A forward contract, therefore, simply amounts to setting a price today for a trade that will occur in the future. Example 4.4 illustrates the mechanics of a forward contract. Since forward contracts are traded over-the-counter rather than on exchanges, the example illustrates a contract between a user and a producer of the underlying commodity.

Example: Forward contract mechanics. A wheat farmer has just planted a crop that is expected to yield 5,000 bushels. To eliminate the risk of a decline in the price of wheat before the harvest, the farmer can sell the 5,000 bushels of wheat forward. A miller may be willing to take the other side of the contract. The two parties agree today on a forward price of 550 cents per bushel, for delivery five months from now when the crop is harvested. No money changes hands now. In five months, the farmer delivers the 5,000 bushels to the miller in exchange for $27,500. Note that this price is fixed and does not depend upon the spot price of wheat at the time of delivery and payment.

Valuation of Forward Contracts

Forward contracts can be valued by recognizing that, in many cases, forward markets are redundant. This occurs when the payoff from a forward contract can be replicated by a position in (1) the underlying asset and (2) riskless bonds. Before illustrating this concept, we define the cost of carry of the underlying commodity. This is the cost involved in holding a physical quantity of the commodity. For wheat the cost of carry is a storage cost, for live hogs it consists of storage and feed costs, and for gold it consists of storage and security costs. Some commodities have a negative cost of carry; for example, holding a stock index provides the benefit of receiving dividends. In forward markets it is common to express the cost of carry as a continuously compounded annual rate, payable at inception. For example, if the cost of carry for wheat is reported to be 5%, this would mean that the cost of storing $100 of wheat for six months is $100 (e^{0.05 x 0.5} - 1) = $2.53, payable immediately.

We use the following notation, which is common in forward markets:
S0 - the spot price now, which is known
St - the spot price of the underlying commodity at time t
ST - the spot price at maturity of the contract, which is not known when the contract is entered into
F - the forward price for delivery at time T
r - the riskless rate of interest from now until maturity of the contract
q - the cost of carry of the underlying commodity
Both r and q are expressed as continuously compounded annual rates. Consider the strategy of:
Borrowing enough money to buy one unit of a commodity and to pay for the associated carrying costs through time T, and

Entering into a forward contract to sell the commodity at time T.

The value of this position in terms of the initial (time 0) and terminal (time T) cash flows is tabulated in the following table.

Arbitrage relationship between spot and forward contracts


Position                      Initial Cash Flow        Terminal Cash Flow
Buy one unit of commodity     -S0                      ST
Pay cost of carry             -S0 (e^(qT) - 1)         0
Borrow                        S0 e^(qT)                -S0 e^((q+r)T)
Enter 6-month forward sale    0                        F - ST
Net portfolio value           0                        F - S0 e^((q+r)T)

Since this portfolio requires no initial cash outlay, the absence of arbitrage opportunities will ensure that the terminal payoff is also zero. Therefore, the forward contract can be valued as

F = S0 e^((q+r)T)

The following example shows how arbitrage is possible if this pricing relation is violated.

Example: Forward arbitrage. Suppose the spot price of wheat is 550 cents per bushel, the six-month forward price is 600, the riskless rate of interest is 5% p.a.,

and the cost of carry is 6% p.a. To execute an arbitrage, you borrow money, buy a bushel of wheat, pay to store it, and sell it forward. The cash flows are:
Position                      Initial Cash Flow            Terminal Cash Flow
Buy one unit of commodity     -550                         ST
Pay cost of carry             -550 (e^(0.06 x 0.5) - 1)    0
Borrow                        550 e^(0.06 x 0.5)           -550 e^((0.06+0.05) x 0.5)
Enter 6-month forward sale    0                            600 - ST
Net portfolio value           0                            600 - 550 e^((0.06+0.05) x 0.5) = 18.90
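The cash-and-carry arithmetic in this example is easy to check numerically. The sketch below is illustrative only; the figures (spot 550, forward 600, r = 5%, q = 6%, six months) are simply those of the example above.

from math import exp

def fair_forward_price(spot, r, q, T):
    # No-arbitrage forward price with a cost of carry q: F = S0 * e^((q + r) * T)
    return spot * exp((q + r) * T)

def cash_and_carry_profit(spot, forward, r, q, T):
    # Terminal profit from: borrow, buy the commodity, pay the carry, sell forward
    return forward - spot * exp((q + r) * T)

spot, forward, r, q, T = 550.0, 600.0, 0.05, 0.06, 0.5
print(fair_forward_price(spot, r, q, T))              # about 581.10 cents per bushel
print(cash_and_carry_profit(spot, forward, r, q, T))  # about 18.90 cents per bushel

If the quoted forward price were below the fair value, the arbitrage would run the other way: short the commodity, lend the proceeds, and buy forward.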



That is, it is possible to lock in a sure profit that requires no initial cash outlay.

Hedging With Forward Contracts
The primary motivation for the use of forward contracts is risk management. The wheat farmer in the example above was able to eliminate price risk by selling his crop forward. The example below is a more comprehensive illustration, concerning foreign exchange risk management.

Example: Forward contracts and risk management. XYZ is a multinational corporation based in the US. Its manufacturing facilities are located in Pittsburgh, and hence its labor and manufacturing costs are incurred in US dollars (USD). A large fraction of its sales, however, are made to German customers who pay for the goods in Deutschemarks (GDM). There is a six-month lead time between the placement of a customer order and delivery of the product. XYZ's cost of production is 80% of the sale price.

Suppose XYZ receives a 1MM GDM order and that the current USD/GDM exchange rate is 0.60 (i.e., 1 GDM = 0.60 USD). The cost of production of this order is $480,000 (0.60 x 1MM x 0.80). The exchange rate six months from now is, of course, uncertain, so XYZ is exposed to exchange rate risk. If the exchange rate stays at 0.60, then XYZ will convert the 1MM GDM to $600,000 and earn a 25% profit on the $480,000 cost of production. If, however, the exchange rate falls to 0.40 six months from now, XYZ will convert the 1MM GDM to only $400,000, registering a loss on the sale. Conversely, if the exchange rate rises to 0.80 six months from now, XYZ will convert the 1MM GDM to $800,000, registering a very large profit on the sale.

Whereas XYZ is very good at manufacturing and marketing its product, it has no expertise in forecasting exchange rate movements. Therefore, it wants to avoid the exchange rate risk inherent in this transaction (i.e., the risk that it does everything right and then loses money on the sale, solely because exchange rates move against it). It can do this by selling forward 1MM GDM. This involves entering a contract today with, say, an investment bank under which XYZ agrees to deliver 1MM GDM six months from now in exchange for a fixed number of US dollars. This rate of exchange is the six-month forward rate. Suppose the six-month forward rate is 0.62 (which is set according to market expectations and relative interest rates as described below). Then, when XYZ receives 1MM GDM from its customer, it delivers it to the investment bank in exchange for $620,000 (locking in a profit) regardless of whether the exchange rate happens to be 0.40 or 0.80 at that time.

Futures Contracts
The Mechanics of Futures Contracts
A futures contract is similar to a forward contract except for two important differences. First, intermediate gains or losses are posted each day during the life of the futures contract. This feature is known as marking to market. The intermediate gains or losses are given by the difference between today's futures price and yesterday's futures price. Second, futures contracts are traded on organized exchanges with standardized terms, whereas forward contracts are traded over-the-counter (customized one-off transactions between a buyer and a seller).

The example below illustrates the marking-to-market mechanics of the All Ordinaries Share Price Index (SPI) futures contract on the Sydney Futures Exchange. The SPI contract is similar to the Chicago Mercantile Exchange (CME) S&P 500 contract and the London International Financial Futures Exchange (LIFFE) FTSE 100 contract; the mechanics are the same for all of these contracts. Stock index futures were introduced in Australia in 1983 in the form of Share Price Index (SPI) futures, which are based on the Australian Stock Exchange's (ASX) All Ordinaries Index, the benchmark indicator of the Australian stock market. Users of SPI futures include major international and Australian banks, fund managers and other large investment institutions. SFE locals and private investors are also active participants in the market. SPI futures have an underlying of A$25 x Index (i.e., a SPI futures contract with a price of 2000.00 will have a contract value of A$50,000). The All Ordinaries Share Price Index (AOI) is a capitalization-weighted index calculated using the market prices of approximately 318 of the largest companies listed on the Australian Stock Exchange (ASX). The aggregate market value of these companies totals over 95% of the value of the 1,186 domestic stocks listed.

Example: Marking to market. Suppose an Australian futures speculator buys one SPI futures contract on the Sydney Futures Exchange (SFE) at 11:00am on June 6. At that time, the futures price is 2300. At the close of trading on June 6, the futures price has fallen to 2290 (what causes futures prices to move is discussed below). Underlying one futures contract is A$25 x Index, so the buyer's position has changed by $25(2290 - 2300) = -$250. Since the buyer has bought the futures contract and the price has gone down, he has lost money on the day and his broker will immediately take $250 out of his account. This immediate reflection of the gain or loss is known as marking to market. Where does the $250 go? On the opposite side of the buyer's buy order there was a seller, who has made a gain of $250 (note that futures trading is a zero-sum game: whatever one party loses, the counter party gains). The $250 is credited to the seller's account. Suppose that at the close of trading the following day, the futures price is 2310. Since the buyer has bought the futures and the price has gone up, he makes money. In particular, $25(2310 - 2290) = +$500 is credited to his account. This money, of course, comes from the seller's account.

This concept of marking to market is standard across all major futures contracts. Contracts are marked to market at the close of trading each day until the contract expires. At expiration, there are two different mechanisms for settlement. Most financial futures (such as stock index, foreign exchange, and interest rate futures) are cash settled, whereas most physical futures (agricultural, metal, and energy futures) are settled by delivery of the physical commodity.
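The marking-to-market arithmetic in the SPI example can be sketched as a running total. This is purely illustrative; the multiplier (A$25 per index point) and the price path are taken from the example above.

def daily_mark_to_market(prices, multiplier=25, position=+1):
    # position = +1 for a long (buyer), -1 for a short (seller)
    # Returns the cash flow credited (+) or debited (-) to the account each day
    flows = []
    for yesterday, today in zip(prices[:-1], prices[1:]):
        flows.append(position * multiplier * (today - yesterday))
    return flows

# Trade price, then two daily closes from the SPI example: 2300 -> 2290 -> 2310
print(daily_mark_to_market([2300, 2290, 2310]))                  # [-250, 500] for the long
print(daily_mark_to_market([2300, 2290, 2310], position=-1))     # [250, -500] for the short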
The example below illustrates cash settlement.

Example: Cash settlement. Suppose the SPI futures contract price was 2350 at the close of trading on the day before expiration and 2360 at the close of trading on the expiration day. Settlement simply involves a payment of $25(2360 - 2350) = $250 from the seller's account to the buyer's account. The expiration day is treated just like any other day in terms of standard marking to market.

An alternative to cash settlement is physical delivery. Consider the SFE wool futures contract, which requires delivery of 2,500 kg of wool when the contract matures. Of course, there are different grades of wool, so a set of rules governing deliverable quality is required. These are detailed rules that govern the standard quality of the underlying commodity and a schedule of discounts and premiums for delivery of lower and higher quality respectively. The example below illustrates the rules governing deliverable quality for the SFE greasy wool futures contract.

Example: Deliverable quality: greasy wool futures. Delivery must be made at approved warehouses in the major wool selling centers throughout Australia. For wool to be deliverable, it must possess the relevant measurement certificates issued by the Australian Wool Testing Authority (AWTA) and appraisal certificates issued by the Australian Wool Exchange Limited (AWEX). In particular, it must be good top making merino fleece with an average fibre diameter of 21.0 microns, with measured mean staple strength of 35 N/ktex, mean staple length of 90mm, of good colour with less than 1.0% vegetable matter. Because any particular bale of wool is unlikely to exactly match these specifications, wool within some prespecified tolerance is deliverable. In particular, 2,400 to 2,600 clean weight kilograms of merino fleece wool, of good top making style or better, good colour, with average micron between 19.6 and 22.5 micron, measured staple length between 80mm and 100mm, measured staple strength greater than 30 N/ktex, and less than 2.0% vegetable matter is deliverable. Premiums and discounts for delivery that does not match the exact specifications of the underlying contract are fixed on the Friday prior to the last day of trading for all deliverable wools above and below the standard, quoted in cents per kilogram clean.


The example below illustrates the process of physical delivery for the SFE greasy wool futures contract. The process is similar for most commodity futures contracts.

Example: Physical delivery. Suppose the greasy wool futures contract price was 700 cents at the close of trading on the expiration day. Settlement involves physical delivery, from the seller of the futures contract to the buyer, of the underlying quantity of wool (2,500 kilograms) on the business day following the expiration day. Delivery, therefore, involves the seller delivering 2,500 kg of wool to the buyer, in return for a payment of A$17,500. The wool must be within the tolerance described above. If the wool is of better quality than is specified in the contract, a premium must be paid. Conversely, wool of lower quality involves a discount. It is the seller of the futures who must make delivery of the wool




and he has the option to choose what quality he will deliver, subject to the schedule of discounts and premiums.

Margin
Although futures contracts require no initial investment, futures exchanges require both the buyer and seller to post a security deposit known as margin. Margin is typically set at an amount that is larger than usual one-day moves in the futures price. This is done to ensure that both parties will have sufficient funds available to mark to market. Residual credit risk exists only to the extent that (1) futures prices move so dramatically that the amount required to mark to market is larger than the balance of an individual's margin account, and (2) the individual defaults on payment of the balance. In this case, the exchange bears the loss, so that participants in futures markets bear essentially zero credit risk.

Margin rules are stated in terms of initial margin (which must be posted when entering the contract) and maintenance margin (which is the minimum acceptable balance in the margin account). If the balance of the account falls below the maintenance level, the exchange makes a margin call upon the individual, who must then restore the account to the level of initial margin before the start of trading the following day. The example below illustrates the margining procedure.
Example: Margin. Suppose a contract requires initial margin of $7,000 and maintenance margin of $5,000. The following table illustrates the margining procedure and the cash flows required for the buyer of a futures contract.

Time   Value of Futures Contract   Margin Balance before Call   Margin Calls   Margin Balance after Calls
0      25,000                      0                            7,000          7,000
1      24,000                      6,000                        0              6,000
2      22,000                      4,000                        3,000          7,000
3      24,500                      7,000                        0              7,000

Note that when the margin balance falls below the maintenance margin, it must be restored to the initial level. Note also that when the futures price moves favorably (as at time 3) the marking-to-market cash inflow can be immediately withdrawn; it need not remain in the margin account.
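The margining procedure is mechanical enough to simulate directly. The sketch below assumes, as in the example, a long position with initial margin of $7,000, maintenance margin of $5,000, and contract values 25,000 -> 24,000 -> 22,000 -> 24,500; it also assumes any balance above the initial margin is withdrawn immediately, which is one permissible convention (the note above says withdrawal is allowed, not required).

def simulate_margin(contract_values, initial_margin=7000, maintenance_margin=5000):
    # Returns (balance before call, margin call, balance after call and withdrawal) per day
    rows, balance = [], 0
    for i, value in enumerate(contract_values):
        if i > 0:
            balance += value - contract_values[i - 1]   # daily mark to market
        before_call = balance
        # Top up to the initial margin when opening the position or when below maintenance
        call = initial_margin - balance if (i == 0 or balance < maintenance_margin) else 0
        balance += call
        balance -= max(balance - initial_margin, 0)     # withdraw any excess at once
        rows.append((before_call, call, balance))
    return rows

for row in simulate_margin([25000, 24000, 22000, 24500]):
    print(row)

At time 3 this prints a pre-withdrawal balance of 9,500; after the 2,500 gain is withdrawn, the account returns to 7,000, as in the table above.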

Valuation of Futures Contracts
Whereas the valuation of forward contracts is relatively straightforward, the marking-to-market feature complicates the valuation of futures contracts. The cash flows associated with forward and futures contracts are illustrated in the following table.

Cash Flows of Forward and Futures Contracts
Time   Forward Cash Flow   Futures Cash Flow
0      0                   0
1      0                   FU1 - FU0
2      0                   FU2 - FU1
...    ...                 ...
T      ST - FO0            FUT - FUT-1

For both contracts, no money changes hands at the time the contract is initiated (time 0). For the forward contract, no money changes hands until the contract matures (time T). For the futures contract, money changes hands daily depending upon movements in the futures price. In some circumstances, however, a futures contract is perfectly equivalent to a forward contract, in which case the two contracts must have the same value. Since forward contracts are relatively easy to value using a no-arbitrage argument, this provides a convenient way of valuing a futures contract. In particular, if interest


rates are constant (at a continuously compounded annual rate of r) over the life of the contract, then the prices of the futures contract and the forward contract are identical. This equivalence can be established by considering a roll-over strategy whereby at time 0 an investor purchases e^r futures contracts and invests FU0 in a riskless bond (where FU represents the futures price). At time 1 the profit (possibly negative) on the futures position is e^r (FU1 - FU0), which he invests (or borrows) until maturity. At maturity, this has grown to e^(r(T-1)) e^r (FU1 - FU0) = e^(rT) (FU1 - FU0). At time 1 he increases his holding to e^(2r) contracts. At time 2 the profit (possibly negative) on this position is e^(2r) (FU2 - FU1), which he invests (or borrows) until maturity. At maturity, this has grown to e^(r(T-2)) e^(2r) (FU2 - FU1) = e^(rT) (FU2 - FU1). At time 2 he increases his holding to e^(3r) contracts, and so on. At maturity, the total payoff on the futures position is:

e^(rT) [(FU1 - FU0) + (FU2 - FU1) + ... + (FUT - FUT-1)] = e^(rT) (ST - FU0)

where we note that FUT = ST. The payoff on the bond is FU0 e^(rT). Therefore, the overall initial investment required for this strategy is FU0 and the overall payoff at time T is ST e^(rT).

Now consider the strategy of buying e^(rT) forward contracts on day 0 and investing FO0 in a riskless bond (where FO represents the price of a forward contract). The overall initial investment required for this strategy is FO0 and the overall payoff at time T is:

e^(rT) (ST - FO0) + FO0 e^(rT) = ST e^(rT)

Since both of these strategies have the same payoff, they must cost the same. That is, FO0 = FU0. The following table illustrates the cash flows associated with the two strategies.

Forward-Futures Equivalence

Time 0
                                  Forward Position       Futures Position
Number of contracts purchased     e^(rT)                 e^r
Cash flow from contract           0                      0
Investment in bonds               -FO0                   -FU0
Net cash flow                     -FO0                   -FU0

Time t (t = 1, ..., T-1)
                                  Forward Position       Futures Position
Number of contracts purchased     0                      e^(r(t+1))
Cash flow from contract           0                      e^(rt) (FUt - FUt-1)
Investment in bonds               0                      -e^(rt) (FUt - FUt-1)
Net cash flow                     0                      0

Time T
                                  Forward Position       Futures Position
Cash flow from contract           e^(rT) (ST - FO0)      e^(rT) (FUT - FUT-1)
Payoff from bonds                 FO0 e^(rT)             e^(rT) [FU0 + (FU1 - FU0) + ... + (FUT-1 - FUT-2)]
Net cash flow                     ST e^(rT)              ST e^(rT)

Arbitrage Relationships
For the remainder of this module, we assume that interest rates are indeed constant over the period of the contract and hence the futures price equals the forward price. That is, we can consider the price and payoffs of a futures contract to be identical to those of a forward contract. This simplifies things because a forward contract has only a single payoff at maturity.

Consider, for example, the valuation of a futures contract on the S&P 500 stock index. This contract, which trades on the Chicago Mercantile Exchange (CME), entitles the buyer to receive the cash value of the S&P 500 stock index at the end of the contract period. There are always four contracts in effect at any one time, expiring in March, June, September, and December. In contrast to the previous examples that involved a cost of carry, holding the S&P 500 index yields a benefit, in the form of dividends received, rather than a cost of carry. The result is that the value of an S&P 500 futures contract can be expressed as

F = S0 e^((r-d)T)

where
F   The futures price
S0  The current value of the S&P 500 stock index
r   The interest rate (annual continuously compounded T-Bill rate)
T   The time to maturity of the contract
d   The dividend yield on the index (continuously compounded annual rate)

This is the same as the forward pricing relation F = S0 e^((q+r)T) except that +q has been replaced by -d, as the cost of carry (storing wheat) has been replaced by a benefit (dividends). To see why this relationship must hold, consider the strategy of (1) borrowing e^(-dT) S0 through time T, (2) using this to purchase e^(-dT) units of the index and reinvesting all dividends back into the index, and (3) selling a futures contract that matures at time T. If interest rates are constant, the futures contract is equivalent to a forward contract, which simplifies the analysis. In particular, the (equivalent) cash flows associated with this strategy are tabulated in the following table. Note that reinvestment of the dividends has resulted in the initial investment of e^(-dT) units of the index growing at a rate of d to amount to one unit by maturity.

Arbitrage Relationship Between Spot and Futures Contract

Position                      Time 0          Time T
Borrow                        e^(-dT) S0      -e^(rT) e^(-dT) S0 = -S0 e^((r-d)T)
Buy e^(-dT) units of index    -e^(-dT) S0     ST
Sell one futures contract     0               F - ST
Net position                  0               F - S0 e^((r-d)T)

Once again, since this strategy requires no initial cash outlay, the cash flow at maturity must also be zero or an arbitrage opportunity exists. In particular, if F > S0 e^((r-d)T) the strategy of buying the index and selling the futures generates an arbitrage profit. Conversely, if F < S0 e^((r-d)T) the strategy of selling the index and buying the futures generates an arbitrage profit. The two examples given below illustrate how to execute a riskless arbitrage if this equality does not hold.

Example: Futures arbitrage: Buy index - Sell futures. Suppose the S&P 500 stock index is at $295 and the six-month futures contract on that index is at $300. If the prevailing T-Bill rate is 7% and the dividend rate is 5%, an arbitrage opportunity exists because F = 300 > S0 e^((r-d)T) = 297.96. The arbitrage can be executed by buying low and selling high. In this case, the futures contract is relatively overvalued, so we sell the futures and buy the index. In particular, the strategy is to

Borrow e^(-dT) S0 = $287.72 at 7%, repayable in 6 months.




Use this $287.72 to buy e^(-dT) = 0.975 units of the S&P index, and reinvest all dividends in the index.

Sell a futures contract for delivery of the index in six months.

This generates the following cash flows:
Position                      Time 0      Time T
Borrow                        287.72      -297.96
Buy e^(-dT) units of index    -287.72     ST
Sell one futures contract     0           300 - ST
Net position                  0           2.04
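The fair-value comparison that drives both index-arbitrage examples can be sketched as below. It is illustrative only; the inputs are the ones used in the two examples (index at 295 or 300, futures at 300, T-Bill rate 7%, dividend yield 5%, six months).

from math import exp

def index_futures_fair_value(spot, r, d, T):
    # Cost-of-carry relation with a dividend benefit: F = S0 * e^((r - d) * T)
    return spot * exp((r - d) * T)

def arbitrage_profit_at_T(spot, futures, r, d, T):
    # Riskless profit per unit of index, received at maturity.
    # If futures > fair value: buy index, sell futures; otherwise the reverse.
    return abs(futures - index_futures_fair_value(spot, r, d, T))

print(index_futures_fair_value(295, 0.07, 0.05, 0.5))    # about 297.96
print(arbitrage_profit_at_T(295, 300, 0.07, 0.05, 0.5))  # about 2.04
print(index_futures_fair_value(300, 0.07, 0.05, 0.5))    # about 303.02
print(arbitrage_profit_at_T(300, 300, 0.07, 0.05, 0.5))  # about 3.02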

Hence this strategy generates an arbitrage profit of $2.04 six months from now.

Example: Futures arbitrage: Sell index - Buy futures. Suppose the S&P 500 stock index is at $300 and the six-month futures contract on that index is at $300. If the prevailing T-Bill rate is 7% and the dividend rate is 5%, an arbitrage opportunity exists because F = 300 < S0 e^((r-d)T) = 303.02. The arbitrage can be executed by buying low and selling high. In this case, the futures contract is relatively undervalued, so we buy the futures and sell the index. In particular, the strategy is to

Short sell e^(-dT) units of the S&P index, generating S0 e^(-dT) = 292.59.

Lend the $292.59 proceeds of the short sale at 7%, repayable in 6 months.

Buy a futures contract for delivery of the index in six months.

This generates the following cash flows:

Position                      Time 0      Time T
Sell e^(-dT) units of index   292.59      -ST
Lend                          -292.59     303.02
Buy one futures contract      0           ST - 300
Net position                  0           3.02

Hence this strategy generates an arbitrage profit of $3.02 six months from now.

Hedging with Futures
In this section, we examine how three common business risks (interest rate risk, stock market risk, and foreign exchange risk) can be hedged in a practical setting. In each case, we describe the nature of the risk and illustrate, through a series of practical examples, how the risk can be managed.

Hedging Interest Rate Risk
There are two primary interest rate futures contracts that trade on US exchanges. The Eurodollar futures contract trades on the Chicago Mercantile Exchange and the US T-Bill futures contract trades on the Chicago Board of Trade. The Eurodollar contract is the more successful and heavily traded contract. At any point in time, the notional loan amount underlying outstanding Eurodollar futures contracts is in excess of $4 trillion. This contract is based on LIBOR (London Interbank Offer Rate), which is an interest rate payable on Eurodollar time deposits. This rate is the benchmark for many US borrowers and lenders. For example, a corporate borrower may be quoted a rate of LIBOR + 200 basis points on a short-term loan.

Eurodollar time deposits are non-negotiable, fixed rate US dollar deposits in banks that are not subject to US banking regulations. These banks may be located in Europe, the Caribbean, Asia, or South America. US banks can take deposits on an unregulated basis through their international banking facilities. LIBOR is the rate at which major money center banks are willing to place Eurodollar time deposits at other major money center banks. Corporations usually borrow at a spread above LIBOR since a corporation's credit risk is greater than that of a major money center bank. By convention, LIBOR is quoted as an annualized rate based on an actual/360-day year (i.e., interest is paid for each day at the annual rate/360). The example below demonstrates how interest is calculated on a LIBOR loan according to these conventions.

The Eurodollar futures contract is based on a 3-month $1 million Eurodollar time deposit. It is cash settled, so no actual delivery of the time deposit occurs when the contract expires. Delivery months are March, June, September, and December. The minimum price move is $25 per contract, which is equivalent to 1 basis point: (0.0001/4) x 1,000,000 = 25. The futures price at expiration (time T) is determined as FT = 100 - LIBOR. Prior to expiration, the futures price implies the interest rate that can be effectively locked in for a 3-month loan that begins on the day the contract matures. Settlement of the Eurodollar futures contract is illustrated in the example below.

Example: Settlement of the Eurodollar futures contract. Suppose you purchased 1 December Eurodollar futures contract on November 15 when the price was 94.86. If interest rates fall 100 basis points between November 15 and expiration of the futures contract in December, what is your total gain or loss on the contract at settlement? First note that no money changes hands at the time you buy the contract; this is the nature of all futures contracts. The November 15 price of 94.86 implies that the LIBOR rate of interest was 100 - 94.86 = 5.14% at that time. If LIBOR falls 100 basis points by the time the December contract expires, LIBOR will then be 4.14%. Therefore, the expiration futures price will be 100 - 4.14 = 95.86. The total gain is therefore:

0.25 x 1,000,000 x (FT - F0) = 0.25 x 1,000,000 x (0.9586 - 0.9486) = $2,500

That is, to settle the contract, your counter party will give you $2,500. The example below contains a detailed illustration of how the Eurodollar futures contract can be used to hedge interest rate risk.

Example: Hedging with the Eurodollar futures contract. It is currently November 15 and your company is aware that it needs to borrow $1 million on December 16 to pay a liability which falls due on that day. The loan can be repaid on March 16 when an account receivable will be collected. The current LIBOR rate is 5.14%. Your company is concerned that interest rates will rise between now and December 16, in which case you will pay a higher rate of interest on your loan. How can your company lock in the current rate of 5.14%?


Example 4.22: LIBOR conventions. If 3-month (90 actual days) LIBOR is quoted as 8%, the interest payable on a $1 million loan at the end of the 3-month borrowing period is

(0.08) x (90/360) x $1,000,000 = (0.08/4) x $1,000,000 = $20,000
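The day-count convention above and the Eurodollar settlement formula from the earlier example are simple enough to express directly. The sketch below is illustrative; the 90-day period, the 8% quote, and the 94.86 / 95.86 futures prices are taken from the surrounding examples.

def libor_interest(principal, libor_rate, days):
    # Actual/360 convention: interest accrues at the annual rate / 360 per day
    return principal * libor_rate * days / 360.0

def eurodollar_settlement_gain(f0, ft, contracts=1):
    # Prices are quoted as 100 - LIBOR; each contract covers a 3-month $1 million deposit,
    # so a one basis point move is worth $25 per contract
    return contracts * 0.25 * 1_000_000 * (ft - f0) / 100.0

print(libor_interest(1_000_000, 0.08, 90))        # 20000.0
print(eurodollar_settlement_gain(94.86, 95.86))   # 2500.0 for a long position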



Your company stands to lose if interest rates increase. Therefore, you want to enter a futures position that increases in value if interest rates rise. Then, if interest rates rise, your company loses by paying higher interest charges on the loan, but your company gains by profiting on the futures position. Conversely, if interest rates fall, your company gains by paying lower interest charges on the loan, but your company loses on the futures position. Ideally, the loss and the gain would exactly cancel, whether interest rates rise or fall.

From the construction of the Eurodollar futures contract, we know that if the interest rate rises, the futures price will fall. Therefore, you will sell 1 December Eurodollar futures contract at 94.86. Underlying this contract is a notional 3-month $1 million loan to be entered into on December 16 (the day the contract expires). If we could lock in the rate of 5.14%, the total interest on the loan would be 0.0514 x $1 million / 4 = $12,850.

First, suppose that on December 16 LIBOR is 6.14%. Interest on the loan will be 0.0614 x $1 million / 4 = $15,350, and the gain on the (short) futures position will be -10,000 x (93.86 - 94.86) / 4 = $2,500. This yields a net cash outflow of -$15,350 + $2,500 = -$12,850, which is the same as 3 months' interest on $1 million at 5.14%.

Now suppose that on December 16 LIBOR is 4.14%. Interest on the loan will be 0.0414 x $1 million / 4 = $10,350, and the gain on the futures position will be -10,000 x (95.86 - 94.86) / 4 = -$2,500. This yields a net cash outflow of -$10,350 - $2,500 = -$12,850, which again is the same as 3 months' interest on $1 million at 5.14%.

Hedging Market Risk
Another source of risk that an individual or organization may wish to hedge is stock market risk. For example, a person nearing retirement may wish to hedge the value of the equities component of his retirement fund against a stock market crash before he retires. A fund manager who believes he can pick winners among individual stocks may wish to hedge market-wide movements. The dominant stock market index futures contract is the S&P 500 futures contract. This contract trades on the Chicago Mercantile Exchange and has delivery months March, June, September, and December. The underlying quantity is $500 times the level of the S&P 500 index. The minimum price move is 0.05 index points, which is $25 per contract.
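The contract arithmetic just described (a $500 multiplier and, for hedging, a contract count equal to the value being protected divided by one contract's notional value) can be sketched as follows. This is illustrative only; the numbers in the demonstration lines are taken from the two examples that follow.

def sp500_futures_gain(f0, ft, contracts=1, multiplier=500):
    # Gain to a long position; a short position earns the negative of this
    return contracts * multiplier * (ft - f0)

def contracts_to_hedge(portfolio_value, futures_price, multiplier=500):
    # Number of contracts to sell so the futures notional matches the value being hedged
    return portfolio_value / (multiplier * futures_price)

print(sp500_futures_gain(383.50, 393.50))                    # 5000.0 on one long contract
print(contracts_to_hedge(95_875_000, 383.50))                # 500.0 contracts
print(-sp500_futures_gain(383.50, 303.50, contracts=500))    # 20,000,000 gain on the short hedge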

The example below illustrates the settlement mechanics for the S&P 500 contract.

Example: Settlement of the S&P 500 futures contract. It is currently November 15 and the S&P 500 index is at 382.62. The December S&P 500 futures price is 383.50. If you buy 1 December S&P 500 futures contract, how much will you gain if the futures price at expiration is 393.50? The gain on your futures position is $500 x (FT - F0) = $500 x (393.50 - 383.50) = $5,000. That is, to settle the contract, your counter party will give you $5,000.

The example below contains a detailed illustration of how the S&P 500 futures contract can be used to hedge stock market risk.

Example: Hedging with the S&P 500 futures contract. A portfolio manager holds a portfolio that mimics the S&P 500 index. The S&P 500 index started the year at 306.80 and is currently at 382.62. The December S&P 500 futures price is currently 383.50. The manager's fund was valued at $76.7 million at the beginning of the year. Since the fund has already generated a handsome return for the year, the manager wishes to lock in its current value. That is, he is willing to give up potential increases in order to ensure that the value of the fund does not decrease. How does he lock in the current value of the fund?

First note that at the December futures price of 383.50, the return on the index since the beginning of the year is 383.50/306.80 - 1 = 25%. If the manager is able to lock in this return on his fund, the value of the fund will be 1.25 x $76.7 million = $95.875 million. Since the notional amount underlying an S&P 500 futures contract is 500 x 383.50 = $191,750, the manager can lock in the 25% return by selling 95,875,000/191,750 = 500 contracts.

To illustrate that this position does indeed form a perfect hedge, we examine the net value of the hedged position under two scenarios. First, suppose the value of the S&P 500 index is 303.50 at the end of December. In this case, the value of the fund will be (303.50/383.50) x $95.875 million = $75.875 million. The gain on the (short) futures position will be -500 x 500 x (303.50 - 383.50) = $20 million. Hence the total value of the hedged position is 75.875 + 20 = $95.875 million, locking in a 25% return for the year. Now suppose that the value of the S&P 500 index is 403.50 at the end of December. In this case, the value of the fund will be (403.50/383.50) x $95.875 million = $100.875 million. The gain on the futures position will be -500 x 500 x (403.50 - 383.50) = -$5 million. Hence the total value of the hedged position is 100.875 - 5 = $95.875 million, again locking in a 25% return for the year.

Hedging Foreign Exchange Risk
Another source of risk that an individual or organization may wish to hedge is foreign exchange risk. For example, a person who will be traveling overseas in the coming months may wish to hedge the value of the amount of money he intends to spend abroad against a devaluation of his domestic currency relative to the foreign currency. An exporter who sells goods overseas on credit may wish to hedge against a devaluation of the foreign currency in which payment occurs.

A number of foreign currency futures contracts trade on the International Monetary Market division of the Chicago Mercantile Exchange. The currencies on which contracts are based, and the underlying notional amounts, are listed in the following table. Delivery months for all contracts are March, June, September, and December. Prices are quoted as US dollars per unit of foreign currency. For example, if one Swiss franc buys 69.15 US cents, the price will be quoted as 0.6915.

Denomination of Foreign Currency Futures Contracts

Currency            Underlying Amount
British Pound       62,500 GBP
Canadian Dollar     100,000 C$
German Mark         125,000 DM
Japanese Yen        12,500,000 Y
Swiss Franc         125,000 SF
French Franc        250,000 FF
Australian Dollar   125,000 A$


The example below contains a detailed illustration of hedging exchange rate risk.

Example: Hedging with the Swiss Franc futures contract. Your company sells 10 machines to a Swiss company. The sale price is 100,000 Swiss Francs each and payment is to be made at the end of the calendar year. The December futures price for Swiss Francs is 0.6915. You are worried that the Swiss Franc will depreciate against the US Dollar between now and the end of the year. How can you hedge this exchange rate risk?

Note that since (1) the total exposure is one million Swiss Francs and (2) each futures contract is for 125,000 Francs, eight contracts are required to hedge the exposure. Further, since (1) the company stands to lose if the Swiss Franc depreciates (each Swiss Franc can be converted back into a smaller number of Dollars) and (2) the futures contracts decrease in value if the Swiss Franc depreciates (since the contract is quoted in US Dollars per Swiss Franc), the contracts should be sold.

To illustrate that selling eight futures contracts provides an adequate hedge, suppose that the value of the Swiss Franc is 0.30 at the end of December. In this case, the US Dollar value of the payment for the machines will be 0.30 x 10 x 100,000 = $300,000. The gain on the (short) futures position will be -8 x 125,000 x (0.30 - 0.6915) = $391,500. Hence the total income is $691,500, which equals the income in dollars that would have been received had the exchange rate remained at 0.6915.

Basis Risk
There is no such thing as a perfect hedge; you can never completely eliminate a cash position's risk. Consider a holder of Q Treasury bonds maturing in 2004 with a coupon rate of 8%. Assume that the holder of the bonds believes that bond prices are going to fall. To hedge his risk, the person shorts an equivalent amount of futures contracts for Treasury bonds. At a later date, the person will close out both the bond and futures positions. At the close, the person will receive BT per bond sold in the regular spot or cash market. The futures price is F0 at the time the futures are sold short, and its price at the closeout is FT. Prior to the closeout, both BT and FT are uncertain, although F0 is known. The usual computation of the funds that the person will have at closeout is:

Net revenue (bond sale plus futures) = Q[BT + (F0 - FT)] = QF0 + Q[BT - FT]
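The Swiss Franc hedge above can be checked for any year-end exchange rate, not just 0.30. The sketch below is illustrative; the exposure (SF 1,000,000), contract size (SF 125,000), and futures price (0.6915) are those of the example, and the futures price is assumed to converge to the spot rate at settlement.

def fx_hedged_income(spot_at_settlement, exposure_fc=1_000_000,
                     contract_size=125_000, futures_price=0.6915):
    # Receivable converted at the settlement spot rate...
    unhedged = exposure_fc * spot_at_settlement
    # ...plus the gain on the short futures position (contracts sold, not bought)
    contracts = exposure_fc / contract_size
    futures_gain = -contracts * contract_size * (spot_at_settlement - futures_price)
    return unhedged + futures_gain

for rate in (0.30, 0.6915, 0.90):
    print(rate, fx_hedged_income(rate))   # always 691,500 regardless of the rate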


From the above equation, the net revenue from the hedge position is composed of (1) a certain component that depends upon the futures price at the time of the hedge (F0) and (2) an uncertain component that depends upon the difference between the price received for bonds in the spot market and the futures price at closeout (BT-FT). The difference between the spot and the futures price is called the basis. Thus, uncertainty about the net-hedged revenue arises if there is uncertainty about the basis. To quote Holbrook Working, hedging is speculation in the basis. There are many reasons for the basis to be uncertain. First, the good or instrument being hedged may be different from the good or instrument for which there is a futures contract. This would be the case if a corporate bond offering is hedged


with Treasury bond futures; basis risk arises due to the uncertainty of the yield differential at the time the hedge is lifted. Second, in commodity futures there is basis risk due to locational differentials. For example, a cattle farmer in Texas who hedges with a cattle futures contract that calls for delivery in Omaha has the uncertainty of the closeout differential between the Texas steer price and the Omaha steer price. This is called locational basis risk, and it is usually an important factor in agricultural contracts. The risk is compounded by the fact that the seller usually has the option of where delivery is made. The third type of basis risk arises because the seller of the futures contract often has the option to choose the quality of the goods or financial instrument delivered. For example, the Treasury bond futures market calls for delivery of any U.S. Treasury bond that is not callable within 15 years. Since there are many instruments that are candidates for delivery, the hedger has the risk of fluctuations in the yield spread between the instrument hedged and the instrument ultimately delivered. Fourth, with most futures contracts, the seller has the choice of the date of delivery within the delivery month. This choice is an uncertain value and thus contributes to basis risk. Finally, the mark-to-market aspect of futures results in hedging risk. The uncertainty is about the amount of interest earned or forfeited due to the daily transfers of profits and losses. In fact, the equations for net revenue are not exactly right due to the omission of interest earned (lost) on futures profits (losses).

The Volatility of Futures
A common mistake is to assume that futures are much more volatile than stocks. Percentage changes of futures prices are generally less volatile than the percentage changes of a typical stock. Annualized standard deviations for most futures contracts are in the 15-20% range, whereas a typical stock's is about 30%. There is no reason that futures should be played in a high-risk manner by a large investor. Of course, if the futures investor does not have enough capital (5-8 times margin), then he is required to play with considerable leverage or not at all. Before taking great leverage, the small investor should consider looking at a smaller contract (a grain contract on the CBT is 5,000 bushels, whereas the Mid-America contract is 1,000 bushels). The effect of leverage is to increase volatility. Borrowing to meet the margin requirements will increase gains but also increase losses. Setting aside larger amounts of capital, which are invested in a safe asset, will decrease the volatility.

Risk in the Futures Markets
As we have already seen, one of the most important applications of futures is hedging. Futures contracts were initially introduced to help farmers who did not want to bear the risk of price fluctuations. The farmer could short hedge in March (agree to sell his crop) for a September delivery. This effectively locks in the price that the farmer receives. On the other side, a cereal company may want to guarantee in March the price that it will pay for grain in September. The cereal company will enter into a long hedge.


There are a number of important insights that should be reviewed. The first is that we should be careful about what we consider the investment in a futures contract. It is unlikely that the margin is the investment for most traders. It is rare that somebody plays the futures with total equity equal to the margin. It is more common to invest some of your capital in a money market fund and draw money out of that account as you need it for margin, adding to that account as you gain on the futures contract. It is also uncommon to put the full value of the underlying contract in the money market fund. It is more likely that the futures investor will put a portion of the value of the futures contract into a money market fund. The ratio of the value of the underlying contract to the equity invested in the money market fund is known as the leverage. The leverage is a key determinant of both the return on investment and the volatility of the investment. The higher the leverage, the more volatile are the returns on your portfolio of money market funds and futures. The most extreme leverage is to include no money in the money market fund and only commit your margin.

The second important insight had to do with hedging with futures contracts. The concept of basis risk was introduced. It is extremely unlikely that you can create a perfect hedge. A perfect hedge is when the loss on your cash position is exactly offset by the gain in the futures position. We suggested some reasons why it is unlikely that we can construct a perfect hedge.

The most obvious case is when you are trying to hedge a cash position with futures positions in different instruments. This is the case that we introduced in one of the first lectures, when we hold the Ginnie Mae security and want to hedge this security with a combination of T-Bonds and Euros. It is unlikely, however, that at the expiration of the futures contract the cash price of the T-Bond and Euros will equal the Ginnie Mae. This is the basis risk.

A second type of basis risk arises out of the quality option. We discussed this in terms of food and financial instruments. If you are a farmer and want to lock in the price for your crop of wheat, you may use a futures contract that may call for delivery of a number of different types of wheat. Similarly, in the T-Bond and T-Note contracts, there is a whole range of instruments that are available for delivery. This difference will induce basis risk.

Third, there is a timing option. The futures contract is different from an options contract. Most futures call for delivery within the contract month. It is unclear when the short will deliver the goods. This uncertainty leads to basis risk.

The fourth type of risk is locational basis risk. This is mainly applicable to agricultural commodities. There could be a difference between the cash price of the good that you are selling (cattle) and the futures price at a different location. The last type of uncertainty is linked to the uncertain interest rate flows from the money you make in excess of the margin.

Summary of Important Formulas
F = S0 e^((r+q)T)   The price of a forward contract when there is a cost of carry q. When interest rates are constant, the same relationship holds for a futures contract.

F = S0 e^((r-d)T)   The price of a forward contract when there is a dividend benefit d. When interest rates are constant, the same relationship holds for a futures contract.

Notes:


LESSON 21: OTHER DERIVATIVES CONTRACTS

Constructing Other Derivatives
We have introduced call options, put options, and forward contracts. Futures contracts are essentially the same as forward contracts at this introductory level of analysis. Call and put options give asymmetric payoffs, and forward and futures contracts give symmetric payoffs. While there are many other types of derivative contracts, they generally can be constructed from the basic contracts we have already described. For this reason, many practitioners and academics find it useful to view options and forwards as building blocks that can be used to construct other derivative contracts. The building block approach starts with the basic payoffs summarized in the figure below.

To illustrate the usefulness of the building block approach, suppose that NeedOil decides that it wants protection from high oil prices, but that it does not believe oil prices will rise above $18 a barrel. If NeedOil hedged by buying a call option with an exercise price of $15 (as was described earlier), it would be buying protection against any increase in oil prices above $15, including protection against oil prices above $18. Since it does not believe oil prices will rise above $18, it is buying protection that it deems as having little or no value. NeedOil therefore would like to have a derivative contract with a payoff that increases with prices between $15 and $18, but that does not increase when oil prices are above $18. The solid line in the figure below illustrates the payoff NeedOil wants (ignoring the cost of obtaining protection).

NeedOil can obtain its desired payoff by buying a call option with an exercise price of $15 and selling a call option with an exercise price of $18. To see this, you simply need to graph the payoff on each option separately and then vertically add the payoffs. The figure below illustrates the payoffs from the two options with dashed lines.
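The vertical addition of payoffs described above is easy to express in code. The sketch below assumes the $15 and $18 strikes from the NeedOil example and ignores the option premiums, as the text does.

def call_payoff(spot, strike):
    # Payoff at expiration of a long call
    return max(spot - strike, 0.0)

def bull_call_spread_payoff(spot, low_strike=15.0, high_strike=18.0):
    # Buy the low-strike call, sell the high-strike call
    return call_payoff(spot, low_strike) - call_payoff(spot, high_strike)

for oil_price in (12, 15, 16.5, 18, 25):
    print(oil_price, bull_call_spread_payoff(oil_price))
# The payoff rises from 0 at $15 to a cap of $3 at $18 and stays flat above $18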

A huge variety of other types of derivative securities exist for a number of purposes, including hedging, speculating and arbitrage. One important type of derivative, a swap contract, provides for the exchange of one set of cash flows for another set of cash flows. The amounts of these cash flows are usually tied to cash flows associated with other assets or portfolios. Swap contracts are specified for commodities, currencies, debt and equity securities, interest rates and a large number of other types of assets as well.

These swap contracts have a number of uses. For instance, swap contracts enable financial market participants to synthesize other securities, which are either unavailable or inappropriately priced. For example, Japanese regulations have restricted investment in many types of securities; in particular, Japanese institutions have been restricted with respect to non-yen bond purchases. Suppose that a firm wished to borrow dollars to purchase American products. The Japanese tax code often makes borrowing less expensive in Japan. The borrower could sell to a Japanese institution a yen denominated bond (resulting in an attractive interest rate due to preferential tax treatment of Japanese zero coupon notes), then execute a dollar/yen currency swap such that its initial loan receipts and loan repayments are denominated in dollars. Thus, all of the borrower's net cash flows are denominated in dollars (it has synthesized a dollar loan) and the Japanese institution fulfills regulatory requirements by issuing a yen denominated note.

Swap Contracts
The final type of derivative contract that we will highlight is called a swap contract. Swap contracts have payoffs like a series of forward contracts. That is, instead of having just one payoff at the contract's expiration (or when the option is exercised), a swap contract has a series of payoffs over time. Each payoff depends on the difference between the market price of the underlying asset and a predetermined price, called the swap price.


To illustrate a swap contract, we will again use the example of NeedOil. In this example, NeedOil plans to purchase oil every six months for the next two years and it wants protection against high oil prices at each date. It therefore purchases a swap contract from SWAPCO with the payoffs described in the table below. Notice that at each date, the payoff to NeedOil is just like the payoff from buying a forward contract. Thus, swap contracts can be viewed as a series of forward contracts.

The term swap is used because these transactions allow parties to reduce risk by swapping payments. Without hedging, NeedOil's payments for oil every six months would be uncertain; the payment would equal 250,000 times the price of oil at that time (Pt). By transacting with SWAPCO, NeedOil swaps its uncertain payment for oil for a certain payment. Specifically, SWAPCO gives NeedOil the funds needed to make its uncertain oil payment (250,000 times Pt) and NeedOil gives SWAPCO $15 times 250,000. By swapping its uncertain payments for certain payments, NeedOil reduces its risk.

In this example, the difference between the price of oil at a given date and $15 is always multiplied by 250,000. This is a common feature of swap contracts (and many other derivative contracts): the difference between two prices is multiplied by some number, called the notional principal (in this case 250,000), to determine the dollar payoff.

While notional principal often is used to measure the value of outstanding swap contracts, notional principal usually is a flawed measure of how much money the parties could potentially gain or lose, because potential swap payments depend on the units used for quoting prices and the volatility of prices, as well as the notional principal. For example, if oil prices could vary between $13 and $17 during the time period covered by NeedOil's swap contract, then the payments made by NeedOil could vary between -$500,000 and $500,000. In this particular case, the notional principal understates the potential gain or loss in any given six-month period. For other types of swaps, like interest rate swaps, the notional principal greatly overstates the amount of money at risk.

Table 24.5 gives an example of an interest rate swap. Here, the notional principal is $1 million and SWAPCO pays NeedOil the prevailing one-year T-bill rate minus 5 percent. For example, if the one-year T-bill rate in 12 months equals 6 percent, then SWAPCO pays NeedOil 1 percent times $1 million, or $10,000. If the one-year T-bill rate in two years equals 4.5 percent, then NeedOil pays SWAPCO 0.5 percent times $1 million, or $5,000. Even though the notional principal is $1 million, the likely payments are only a fraction of the notional principal. Thus, for interest rate swaps, the notional principal greatly overstates the amount of money at risk.

Equity Swaps
An equity swap is a contract providing for the delivery of cash flows associated with shares of equity (or an equity index) in exchange for the cash flows associated with another asset (such as a debt or index instrument). For example, an investor wishing to relieve himself of risk associated with shares he is currently holding, without selling and exposing himself to capital gains tax liability, may agree to deliver to another investor the cash flows (dividends and capital gains for a specified period) associated with his shares. The second investor, in turn, agrees to deliver cash flows associated with a treasury bond to the stock investor. Equity swaps are used to exploit apparent mis-pricing in equity markets, to manage risks associated with domestic or foreign equity investment, to circumvent dividend withholding tax requirements in foreign countries and to speculate in foreign equity markets when direct ownership is not permitted.

Equity swaps permit investors to reduce their risk in an equity investment without actually selling shares. One type of participant in this market has been corporate managers, who have used the executive equity swap to reduce their personal exposure in the shares of their employer's stock. In a well-publicized case involving Autotote Company, at the time a NASDAQ-listed manufacturer of wagering equipment, the CEO Lorne Weil arranged to deliver dividends and any capital gains (which would be negative in the event of a capital loss) associated with Autotote stock in exchange for certain cash flows associated with treasury securities. Thus, technically, the CEO did not sell his shares, though he divested himself of the return risk associated with share ownership. By engaging in this equity swap, the CEO reduced his risk in the employing company without having to report a sale of shares (though Weil did voluntarily report this transaction, and the SEC currently requires reporting). This means that the CEO is not subject to capital gains taxes at the time of the transaction.1 The CEO is not likely to bear the selling price consequences associated with an insider sell transaction. Furthermore, the CEO maintains his level of voting control in the company's shares. Thus, in a sense, the equity swap permits the CEO the opportunity to, in effect, execute a sale of shares without bearing most of the undesirable consequences associated with the sale.

Exotic Options
An Asian Option (average rate) is based on the average price (or exchange rate) of the underlying asset (or currency). For example, an Asian call on currency permits its owner to receive the difference between the average currency exchange rate over the life of the option (AT) and the exercise price (E) associated with the option: Max[AT - E, 0]. A potential user of the Asian option might be an exporter who sells to a particular country the same number of units of its product each day. Since the exchange rate will vary daily, the revenues


received by the exporter will vary. The Asian option enables the exporter to stabilize its cash flows without entering the derivatives market on a daily basis. The cash flow structures of these options vary from contract to contract. For example, some contracts call for the payoff to be related to the difference between the time T spot rate and the average exchange rate realized during the life of the option.

A Lookback Option enables its owner to purchase (or sell, in the case of a put) the underlying security at the lowest price (or highest price, in the case of a put) realized over the life of the option. A Barrier Option is similar to a plain vanilla option except that it expires (in the case of a down-and-out option) or is activated (in the case of a down-and-in option) once the underlying asset value reaches a pre-specified price. These are often referred to as knock-out or knock-in options. A Compound Option is simply an option on an option. Rainbow Options are written on two or more assets.
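The path-dependent payoffs just described can be made concrete with a short sketch. This is illustrative only: it takes a hypothetical list of observed exchange rates and a strike of 0.60, and computes the Asian-call and lookback-call payoffs at expiration.

def asian_call_payoff(observed_rates, strike):
    # Average-rate call: payoff is based on the average of the observations
    average = sum(observed_rates) / len(observed_rates)
    return max(average - strike, 0.0)

def lookback_call_payoff(observed_rates):
    # Lookback call: buy at the lowest observed price, valued at the final price
    return observed_rates[-1] - min(observed_rates)

rates = [0.58, 0.61, 0.63, 0.60, 0.62]        # hypothetical daily exchange rates
print(asian_call_payoff(rates, strike=0.60))  # 0.008
print(lookback_call_payoff(rates))            # 0.04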

A Zero Cost Collar is a package of options designed to require zero net investment. For example, the Range Forward Contract enables (and obliges) its owner to purchase the underlying security with a time T value for the following price:

An Interest Rate Cap pays its owner a value based on the difference between the market rate and the cap strike rate if the market rate rises above the strike rate. A Swaption gives its owner the right (but not the obligation) to enter into a swap arrangement at a later date.

Markets for Derivatives
Over-the-counter versus Exchange-traded Derivatives
Earlier we mentioned that the payoffs from forward and futures contracts are similar. One difference between the two contracts is that forward contracts are traded in the over-the-counter (OTC) market and futures contracts are traded at exchanges like the Chicago Board of Trade. Call and put option contracts trade on exchanges as well as in the over-the-counter market. Swap contracts are traded over-the-counter.

An over-the-counter (OTC) derivative contract resembles a privately negotiated contract between two firms. For example, if NeedOil wanted to purchase an option contract to hedge its oil price risk, NeedOil could contact a financial institution in the OTC market, which could then tailor a contract to NeedOil's hedging needs. Exchange-traded derivatives are standardized contracts with the terms established by the exchanges. Since specific details are not subject to negotiation, contracting costs tend to be lower with exchange-traded derivatives than with OTC derivatives. While exchanges try to create standardized contracts that appeal to many participants, the standardization often implies that exchange-traded derivatives have greater basis risk than OTC derivatives.

Initially, financial institutions operating in the OTC market acted as brokers who would identify another firm that would transact with a party such as NeedOil. Today, financial institutions operate more like dealers, taking positions directly with each firm. Thus, NeedOil could buy the option directly from the financial institution. Having sold a call option, the dealer would be exposed to oil price risk, and thus the dealer probably would try to hedge this risk, either by engaging in an offsetting transaction with someone else in the OTC market or by using exchange-traded options and futures contracts.

In addition to the ability to tailor contracts, there are other differences between the OTC and exchange markets. One difference is liquidity, the ability to buy or sell without making a large price concession. When the OTC market creates a contract that is tailored to one participant's needs, this contract tends to be illiquid. In contrast, if the contract were a standardized exchange-traded futures contract, there would likely be more liquidity. The greater liquidity arises in part because the standardized exchange contracts attract many traders. The greater liquidity also is due in part to the method of ensuring that the parties who trade derivatives uphold their agreements. OTC contracts are bilateral contracts. That is, a buyer and seller are specified on the contract, and if one party cannot fulfill its part of the contract, the other party becomes a creditor. As a result, when trading OTC contracts, firms assess the default risk (or credit risk) of the parties with whom they transact. In addition, if a firm wishes to reverse its position, the firm must negotiate with the specific counter party to the contract. These features make OTC contracts less liquid.

Default risk is handled differently with exchange-traded contracts. When taking a futures position, a trader must post a performance bond, called a margin. The bond equals some percentage of the value of the contracts and must be posted either in the form of cash, letters of credit, or government bonds. The purpose of the bond is to ensure the solvency of the trader over the coming day of trading. Thus, at the end of each day, the margin account is monitored to see if there are sufficient funds to ensure solvency over the subsequent day. As an example, suppose that the required margin is always 20 percent of the value of the contract and that Ms. Weiss takes a long position in (buys) one contract when the futures price equals $1,000. Then, Ms. Weiss must post margin equal to $200. Now suppose that over the course of the following day, the futures price falls to $900. Ms. Weiss has lost $100 ($1,000 - $900), which is subtracted from her margin account, leaving only $100. Since Ms. Weiss's position now is worth $900, she needs to have margin equal to $180 (20% of $900). Consequently, Ms. Weiss must add $80 to the margin account. If she does not add this amount, then her position will be closed; that is, she will have to take an offsetting short position in (sell) one contract.

The other important difference between OTC markets and exchange markets is that exchanges have a clearinghouse that acts as an intermediary in every transaction. As stated above, with an OTC contract, the buyer knows the identity of the seller. With exchange-traded contracts, a buyer is not matched with a particular seller. Instead, each transaction is with the clearinghouse. The number of contracts purchased by the clearinghouse must always equal the number that it has sold, but buyers and sellers are not explicitly matched. Thus, if a trader wants to reverse a position (sell the derivative the trader had previously purchased or buy the derivative contract the trader had previously sold), a specific counter party does not have to be notified. Any counter party willing to take the other side of the transaction may be used. The

ww Co w.p m dfw P iza D rd. F com Tr i


Copy Right: Rai University

RISK MANAGEMENTFOR GLOBAL FINANCIAL SERVICES

al

11D.571.3

clearinghouse, along with the daily settlement and margin system for ensuring performance, helps create a liquid market. Common Risks That Are Hedged with Derivatives Although OTC contracts can be tailored to meet the specific hedging needs of individual firms, the types of risk that are most often hedged with derivatives are: (1) foreign exchange rates, (2) interest rates, (3) commodity prices, and (4) equity prices.
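Before turning to these risk categories, the daily settlement and margin mechanics described above can be made concrete with a minimal sketch. It re-creates the Ms. Weiss example and assumes, for simplicity, that the same 20 percent requirement applies every day; actual exchanges set separate initial and maintenance margins for each contract.

# Minimal sketch of daily futures margining for a long position, assuming a
# flat 20% margin requirement as in the Ms. Weiss example above.

def simulate_margin(prices, margin_rate=0.20):
    """Walk a daily futures price path and report any margin calls."""
    margin = margin_rate * prices[0]          # initial margin posted at entry
    print(f"Day 0: price={prices[0]:.2f}, margin posted={margin:.2f}")
    for day, (prev, curr) in enumerate(zip(prices, prices[1:]), start=1):
        gain = curr - prev                    # daily gain or loss on the long position
        margin += gain                        # settled into the margin account
        required = margin_rate * curr         # margin needed for the next day
        top_up = max(0.0, required - margin)  # margin call if the account is short
        margin += top_up
        print(f"Day {day}: price={curr:.2f}, required={required:.2f}, "
              f"margin call={top_up:.2f}")

# The example from the text: long at 1,000, price falls to 900 the next day,
# producing a 100 loss and an 80 margin call (180 required less 100 remaining).
simulate_margin([1_000.0, 900.0])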


Foreign Exchange Derivatives


With the increasing amount of trade among countries and the increased volatility in exchange rates due to the breakdown in 1973 of the previous system of fixed foreign exchange rates, firms have become more interested in hedging against changes in foreign exchange rates. Most multinational companies utilize derivatives to manage their foreign exchange exposures. The most commonly used currency derivatives are swap and forward contracts, which had notional principal of over $1.46 trillion in 2002.
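As a simple illustration of how a firm might use one of these currency derivatives, the sketch below values a hypothetical exporter's euro receivable hedged with a forward contract. The notional amount and rates are assumptions chosen only for illustration.

# Hypothetical sketch: an exporter expecting EUR 1,000,000 in 90 days sells the
# euros forward; the gain or loss on the forward offsets the change in the
# dollar value of the receivable, whatever the spot rate turns out to be.

def hedged_receipt_usd(receivable_eur, forward_rate, spot_at_maturity):
    unhedged = receivable_eur * spot_at_maturity                        # value without the hedge
    forward_gain = receivable_eur * (forward_rate - spot_at_maturity)   # payoff on the short forward
    return unhedged + forward_gain                                      # always the forward rate

for spot in (1.00, 1.10, 1.20):  # assumed USD-per-EUR outcomes at maturity
    print(spot, hedged_receipt_usd(1_000_000, forward_rate=1.10, spot_at_maturity=spot))
# Prints 1,100,000 in every scenario: the hedge removes the exchange rate risk.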

Interest Rate Derivatives
Several factors have contributed to the use of interest rate derivatives to hedge against changes in value due to interest rate changes. One factor is the high level and volatility of interest rates in the 1970s and 1980s, which resulted from high levels of expected inflation as well as changes in expected inflation. Also, in 1979 the Federal Reserve changed its policy of trying to stabilize interest rates directly and instead started targeting monetary aggregates. The consequence of this change in policy was to increase interest rate volatility substantially. Interest rate futures, options, and swaps are frequently used to hedge interest rate risk. The notional principal in 2002 of interest rate derivatives was close to $90 trillion.
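The sketch below shows the cash flow logic of one settlement period on a plain vanilla interest rate swap, a common instrument for this kind of hedging. The notional, fixed rate, and floating rate are illustrative assumptions.

# Minimal sketch of one quarterly settlement on a pay-fixed interest rate swap,
# often used to hedge floating rate funding costs; all numbers are assumptions.

def swap_net_payment(notional, fixed_rate, floating_rate, year_fraction=0.25):
    """Amount received (+) or paid (-) by the pay-fixed counterparty."""
    return notional * (floating_rate - fixed_rate) * year_fraction

print(swap_net_payment(10_000_000, fixed_rate=0.05, floating_rate=0.06))  # +25,000
print(swap_net_payment(10_000_000, fixed_rate=0.05, floating_rate=0.04))  # -25,000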


Commodity Derivatives
Derivative contracts on agricultural commodities have existed for a long time. For example, the Chicago Board of Trade has traded futures contracts since 1865, and forwards and options on agricultural products date back several centuries. Users and producers of commodities such as metals and oil also frequently trade both OTC and exchange-traded derivatives. The use of electricity derivatives also has grown significantly in recent years due in part to deregulation of the industry.

Equity Derivatives
Equity derivatives are contracts derived from stock market indexes like the Standard & Poor's 500. Futures contracts exist that are based on US stock market indexes and on foreign stock market indexes, such as the Nikkei index for the Japanese stock market. In addition, options have traded on individual stocks for some time. The notional principal on futures and options in 2002 equaled about $2.2 trillion.


LESSON 22: MEASURING & HEDGING INTEREST RATE RISK IN BANKS


Chapter Objectives
Measuring Interest Rate Risk in Banks
Hedging Interest Rate Risk in Banks
Measuring Credit Risk
Hedging with Credit Derivatives
Integrating Market & Credit Risk Management

UNIT I CHAPTER 8 CREDIT & INTEREST RATE RISKS


Management should understand the bank's business mix and the risk characteristics of these businesses before it attempts to identify the major sources of the bank's interest rate risk exposure and the relative contribution of each source to the bank's overall interest rate risk profile. Various risk measurement systems can then be evaluated by how well they identify and quantify the bank's major sources of risk exposure.

Re-pricing or Maturity Mismatch Risk
The interest rate risk exposure of banks can be broken down into four broad categories: re-pricing or maturity mismatch risk, basis risk, yield curve risk, and option risk. Re-pricing risk results from differences in the timing of rate changes and the timing of cash flows that occur in the pricing and maturity of a bank's assets, liabilities, and off-balance-sheet instruments. Re-pricing risk is often the most apparent source of interest rate risk for a bank and is often gauged by comparing the volume of a bank's assets that mature or re-price within a given time period with the volume of liabilities that do so. Some banks intentionally take re-pricing risk in their balance sheet structure in an attempt to improve earnings. Because the yield curve is generally upward sloping (long-term rates are higher than short-term rates), banks can often earn a positive spread by funding long-term assets with short-term liabilities. The earnings of such banks, however, are vulnerable to an increase in interest rates that raises their cost of funds. Banks whose re-pricing asset maturities are longer than their re-pricing liability maturities are said to be liability sensitive, because their liabilities will re-price more quickly. The earnings of a liability-sensitive bank generally increase when interest rates fall and decrease when they rise. Conversely, an asset-sensitive bank (asset re-pricings shorter than liability re-pricings) will generally benefit from a rise in rates and be hurt by a fall in rates. Re-pricing risk is often, but not always, reflected in a bank's current earnings performance. A bank may be creating re-pricing imbalances that will not be manifested in earnings until sometime into the future. A bank that focuses only on short-term re-pricing imbalances may be induced to take on increased interest rate risk by extending maturities to improve yield. When evaluating re-pricing risk, therefore, it is essential that the bank consider not only near-term imbalances but also long-term ones. Failure to measure and manage material long-term re-pricing imbalances can leave a bank's future earnings significantly exposed to interest rate movements.

Basis Risk
Basis risk arises from a shift in the relationship of the rates in different financial markets or on different financial instruments. Basis risk occurs when market rates for different financial instruments, or the indices used to price assets and liabilities, change at different times or by different amounts.

The movement of interest rates affects a bank's reported earnings and book capital by changing:
Net interest income,
The market value of trading accounts (and other instruments accounted for by market value), and
Other interest sensitive income and expenses, such as mortgage servicing fees.


Changes in interest rates also affect a bank's underlying economic value. The value of a bank's assets, liabilities, and interest-rate-related, off-balance-sheet contracts is affected by a change in rates because the present value of future cash flows, and in some cases the cash flows themselves, is changed. In banks that manage trading activities separately, the exposure of earnings and capital to those activities because of changes in market factors is referred to as price risk. Price risk is the risk to earnings or capital arising from changes in the value of portfolios of financial instruments. This risk arises from market making, dealing, and position-taking activities for interest rate, foreign exchange, equity, and commodity markets. The same fundamental principles of risk management apply to both interest rate risk and price risk.

Risk Identification
The systems and processes by which a bank identifies and measures risk should be appropriate to the nature and complexity of the bank's operations. Such systems must provide adequate, timely, and accurate information if the bank is to identify and control interest rate risk exposures. Interest rate risk may arise from a variety of sources, and measurement systems vary in how thoroughly they capture each type of interest rate exposure. To find the measurement systems that are most appropriate, bank management should first consider the nature and mix of the bank's products and activities.


Introduction
Interest rate risk is the risk to earnings or capital arising from movement of interest rates. It arises from differences between the timing of rate changes and the timing of cash flows (re-pricing risk); from changing rate relationships among yield curves that affect bank activities (basis risk); from changing rate relationships across the spectrum of maturities (yield curve risk); and from interest-rate-related options embedded in bank products (option risk). The evaluation of interest rate risk must consider the impact of complex, illiquid hedging strategies or products, and also the potential impact on fee income that is sensitive to changes in interest rates.


For example, basis risk occurs when the spread between the three-month Treasury and the three-month London inter-bank offered rate (Libor) changes. This change affects a bank's current net interest margin through changes in the earned/paid spreads of instruments that are being re-priced. It also affects the anticipated future cash flows from such instruments, which in turn affects the underlying net economic value of the bank. Basis risk can also be said to include changes in the relationship between managed rates, or rates established by the bank, and external rates. For example, basis risk may arise because of differences in the prime rate and a bank's offering rates on various liability products, such as money market deposits and savings accounts. Because consumer deposit rates tend to lag behind increases in market interest rates, many retail banks may see an initial improvement in their net interest margins when rates are rising. As rates stabilize, however, this benefit may be offset by repricing imbalances and unfavorable spreads in other key market interest rate relationships as deposit rates gradually catch up to the market. (Many bankers view this lagged and asymmetric pricing behavior as a form of option risk. Whether this behavior is categorized as basis or option risk is not important so long as bank management understands the implications that this pricing behavior will have on the bank's interest rate risk exposure.)
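A rough numerical sketch of this kind of basis risk is given below: a bank hedging a Treasury note position with a pay-fixed swap priced off Libor is protected against parallel rate moves but not against a change in the swap-Treasury spread. The DV01 and the rate moves are assumed values used only for illustration.

# Illustrative sketch of basis risk on a hedged position: long Treasury notes
# hedged with a pay-fixed interest rate swap. All inputs are assumptions.

def hedged_pnl(treasury_move_bp, swap_move_bp, dv01=10_000):
    """Profit or loss when yields move; dv01 = dollar value of one basis point."""
    bond_pnl = -treasury_move_bp * dv01   # long bonds lose as Treasury yields rise
    swap_pnl = +swap_move_bp * dv01       # the pay-fixed swap gains as swap rates rise
    return bond_pnl + swap_pnl

print(hedged_pnl(treasury_move_bp=10, swap_move_bp=10))  # 0: parallel move, hedge works
print(hedged_pnl(treasury_move_bp=10, swap_move_bp=5))   # -50,000: spread change leaves a loss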

Certain pricing indices have a built-in lag feature such that the index will respond more slowly to changes in market interest rates. Such lags may either accentuate or moderate the banks shortterm interest rate exposure. One common index with this feature is the 11th District Federal Home Loan Bank Cost of Funds Index (COFI) used in certain adjustable rate residential mortgage products (ARMs). The COFI index, which is based upon the monthly average interest costs of liabilities for thrifts in the 11th District (California, Arizona, and Nevada), is a composite index containing both short and long-term liabilities. Because current market interest rates will not be reflected in the index until the long-term liabilities have been re-priced, the index generally will lag market interest rate movements.


A bank that holds COFI ARMs funded with three-month consumer deposits may find that, in a rising rate environment, its liability costs are rising faster than the repricing rate on the ARMs. In a falling rate environment, the COFI lag will tend to work in the bank's favor, because the interest received from ARMs adjusts downward more slowly than the bank's liabilities.

Hedging with Derivative Contracts
Some banks use off-balance-sheet derivatives as an alternative to other investments; others use them to manage their earnings or capital exposures. Banks can use off-balance-sheet derivatives to achieve any or all of the following objectives: limit downside earnings exposures, preserve upside earnings potential, increase yield, and minimize income or capital volatility. Although derivatives can be used to hedge interest rate risk, they expose a bank to basis risk because the spread relationship between cash and derivative instruments may change. For example, a bank using interest rate swaps (priced off Libor) to hedge its Treasury note portfolio may face basis risk because the spread between the swap rate and Treasuries may change. A bank using off-balance-sheet instruments such as futures, swaps, and options to hedge or alter the interest rate risk characteristics of on-


balance-sheet positions needs to consider how the off-balancesheet contracts cash flows may change with changes in interest rates and in relation to the positions being hedged or altered. Derivative strategies designed to hedge or offset the risk in a balance sheet position will typically use derivative contracts whose cash flow characteristics have a strong correlation with the instrument or position being hedged. The bank will also need to consider the relative liquidity and cost of various contracts, selecting the product that offers the best mix of correlation, liquidity, and relative cost. Even if there is a high degree of correlation between the derivative contract and the position being hedged, the bank may be left with residual basis risk because cash and derivative prices do not always move in tandem. Banks holding large derivative portfolios or actively trading derivative contracts should determine whether the potential exposure presents material risk to the banks earnings or capital. Yield Curve Risk Yield-curve risk arises from variations in the movement of interest rates across the maturity spectrum. It involves changes in the relationship between interest rates of different maturities of the same index or market (e.g., a three-month Treasury versus a five-year Treasury). The relationships change when the shape of the yield curve for a given market flattens, steepens, or becomes negatively sloped (inverted) during an interest rate cycle. Yield curve variation can accentuate the risk of a banks position by amplifying the effect of maturity mismatches. Certain types of structured notes can be particularly vulnerable to changes in the shape of the yield curve. For example, the performance of certain types of structured note products, such as dual index notes, is directly linked to basis and yield curve relationships. These bonds have coupon rates that are determined by the difference between market indices, such as the constant maturity Treasury rate (CMT) and Libor. An example would be a coupon whose rate is based on the following formula: coupon equals 10-year CMT plus 300 basis points less three-month Libor. Since the coupon on this bond adjusts as interest rates change, a bank may incorrectly assume that it will always benefit if interest rates increase. If, however, the increase in three month Libor exceeds the increase in the 10-year CMT rate, the coupon on this instrument will fall, even if both Libor and Treasury rates are increasing. Banks holding these types of instruments should evaluate how their performance may vary under different yield curve shapes. Option Risk Option risk arises when a bank or a banks customer has the right (not the obligation) to alter the level and timing of the cash flows of an asset, liability, or off-balance-sheet instrument. An option gives the option holder the right to buy (call option) or sell (put option) a financial instrument at a specified price (strike price) over a specified period of time. For the seller (or writer) of an option, there is an obligation to perform if the option holder exercises the option. The option holders ability to choose whether to exercise the option creates an asymmetry in an options performance. Generally, option holders will exercise their right only when it is to their benefit. As a result, an option holder faces limited downside risk (the premium or amount paid for the option) and unlimited upside reward.

The option seller faces unlimited downside risk (an option is usually exercised at a disadvantageous time for the option seller) and limited upside reward (if the holder does not exercise the option and the seller retains the premium). Options often result in an asymmetrical risk/reward profile for the bank. If the bank has written (sold) options to its customers, the amount of earnings or capital value that a bank may lose from an unfavorable movement in interest rates may exceed the amount that the bank may gain if rates move in a favorable direction. As a result, the bank may have more downside exposure than upside reward. For many banks, their written options positions leave them exposed to losses from both rising and falling interest rates. Some banks buy and sell options on a stand-alone basis. The option has an explicit price at which it is bought or sold and may or may not be linked with another bank product. A bank does not have to buy and sell explicitly priced options to incur option risk, however. Indeed, almost all banks incur option risk from options that are embedded or incorporated into retail bank products. These options are found on both sides of the balance sheet. On the asset side, prepayment options are the most prevalent embedded option. Most residential mortgage and consumer loans give the consumer an option to prepay with little or no prepayment penalty. Banks may also permit the prepayment of commercial loans by not enforcing prepayment penalties (perhaps to remain competitive in certain markets). A prepayment option is equivalent to having written a call option to the customer. When rates decline, customers will exercise the calls by prepaying loans, and the banks asset maturities will shorten just when the bank would like to be extending them. And when rates rise, customers will keep their mortgages, making it difficult for the bank to shorten asset maturities just when it would like to be doing so.

On the deposit side of the balance sheet, the most prevalent option given to customers is the right of early withdrawal. Early withdrawal rights are like put options on deposits. When rates increase, the market value of the customer's deposit declines, and the customer has the right to put the deposit back to the bank. This option is to the depositor's advantage. As previously noted, bank management's discretion in pricing such retail products as non-maturity deposits can also be viewed as a type of option. This option usually works in the bank's favor. For example, the bank may peg its deposits at rates that lag market rates when interest rates are increasing and that lead market rates when they are decreasing.

Bank products that contain interest caps or floors are other sources of option risk. Such products are often loans and may have a significant effect on a bank's rate exposure. For the bank, a loan cap is like selling a put option on a fixed income security, and a floor is like owning a call. The cap or floor rate of interest is the strike price. When market interest rates exceed the cap rate, the borrower's option moves in the money because the borrower is paying interest at a rate lower than market. When market interest rates decline below the floor, the bank's option moves in the money because the rate paid on the loan is higher than the market rate. Floating rate loans that do not have an explicit cap may have an implicit one at the highest rate that the borrower can afford to pay. In high rate environments, the bank may have to cap the rate on the loan, renegotiate the loan to a lower rate, or face a default on the loan. A bank's non-maturity deposits, such as money market demand accounts (MMDAs), negotiable order of withdrawal (NOW) accounts, and savings accounts also may have implicit caps and floors on the rates of interest that the bank is willing to pay.

Risk Measurement
Accurate and timely measurement of interest rate risk is necessary for proper risk management and control. A bank's risk measurement system should be able to identify and quantify the major sources of the bank's interest rate risk exposure. The system also should enable management to identify risks arising from the bank's customary activities and new businesses. The nature and mix of a bank's business lines and the interest rate risk characteristics of its activities will dictate the type of measurement system required. Such systems will vary from bank to bank. Every risk measurement system has limitations, and systems vary in the degree to which they capture various components of interest rate exposure. Many well-managed banks will use a variety of systems to fully capture all of their sources of interest rate exposure. The three most common risk measurement systems used to quantify a bank's interest rate risk exposure are re-pricing maturity gap reports, net income simulation models, and economic valuation or duration models. The following table summarizes the types of interest rate exposures that these measurement techniques address.


Banks with significant option risk may supplement these models with option pricing or Monte Carlo models. But for many banks, especially smaller ones, the expense of developing options pricing models would outweigh the benefits. Such banks should be able to use their data and measurement systems to identify and track, in a timely and meaningful manner, products that may create significant option risk. Such products may include non-maturity deposits, loans and securities with prepayment and extension risk, and explicit and embedded caps on adjustable rate loans. Bank management should understand how such options may alter the banks interest rate exposure under various interest rate environments. Regardless of the type and level of complexity of a banks measurement system, management should ensure that the system is adequate to the task. All measurement systems require a bank to gather and input position data, make assumptions about possible future interest rate environments and customer behavior, and compute and quantify risk exposure. To assess the


adequacy of a bank's interest rate risk measurement process, examiners should review and evaluate each of these steps.

Gathering Data
The first step in a bank's risk measurement process is to gather data to describe the bank's current financial position. Every measurement system, whether it is a gap report or a complex economic value simulation model, requires information on the composition of the bank's current balance sheet. In modeling terms, gathering financial data is sometimes called providing the current position inputs. This data must be reliable for the risk measurement system to be useful. The bank should have sufficient management information systems (MIS) to allow it to retrieve appropriate and accurate information in a timely manner. The MIS systems should capture interest rate risk data on all of the bank's material positions, and there should be sufficient documentation of the major data sources used in the bank's risk measurement process. Bank management should be alert to the following common data problems of interest rate risk measurement systems:
Incomplete data on the bank's operations, portfolios, or branches.
Lack of information on off-balance-sheet positions and on caps and floors incorporated into bank loan and deposit products.
Inappropriate levels of data aggregation.

Information to Be Collected
To describe the interest rate risk inherent in the bank's current position, the bank should have, for every material type of financial instrument or portfolio, information on:
The current balance and contractual rate of interest associated with the instrument or portfolio.
The scheduled or contractual terms of the instrument or portfolio in terms of principal payments, interest reset dates, and maturities.
For adjustable rate items, the rate index used for repricing (such as prime, Libor, or CD) as well as whether the instruments have contractual interest rate ceilings or floors.

A bank may need to collect additional information on certain products to provide a more complete picture of the bank's interest rate risk exposure. For example, because the age or seasoning of certain loans, such as mortgages, may affect their prepayment speeds, the bank may need to obtain information on the origination date and interest rate of the instruments. The geographic location of the loan or deposit may also help the bank evaluate prepayment or withdrawal speeds. Some banks may use a tiered pricing structure for certain products such as consumer deposits. Under such pricing structures, the level and responsiveness of the rates offered for deposits will vary by the size of the deposit account. If the bank uses this type of pricing, it may need to stratify certain portfolios by account size. Since a bank's interest rate risk exposure extends beyond its on-balance-sheet positions to include off-balance-sheet interest contracts and rate-sensitive fee income, the bank should include these items in its interest rate risk measurement process.

Sources of Information
To obtain the detailed information necessary to measure interest rate risk, banks need to be able to tap or extract data from numerous and diverse transaction systems, the base systems that keep the records of each transaction's maturity, pricing, and payment terms. This means that the bank will need to access information from a variety of systems, including its commercial and consumer loan, investment, and deposit systems. The bank's general ledger may also be used to check the integrity of balance information pulled from these transaction systems. Information from the general ledger system by itself, however, generally will not contain sufficient information on the maturity and repricing characteristics of the bank's portfolios.

Aggregation
The amount of data aggregated from transaction systems for the interest rate risk model will vary from bank to bank and from portfolio to portfolio within a bank. Some banks may input each specific instrument for certain portfolios. For example, the cash flow characteristics of certain complex CMO or structured notes may be so transaction-specific that a bank elects to model or input each transaction separately. More typically, the bank will perform some preliminary data aggregation before putting the data into its interest rate risk model. This ensures ease of use and computing efficiency. Although most bank models can handle hundreds of accounts or transactions, every model has its limit. Because some portfolios contain numerous variables that can affect their interest rate risk, additional categories of information or less aggregated information may be required. For example, banks with significant holdings of adjustable rate mortgages will need to differentiate balances by periodic and lifetime caps, the reset frequency of mortgages, and the market index used for rate resets. Banks with significant holdings of fixed rate mortgages will need to stratify balances by coupon levels to reflect differences in prepayment behaviors.

Developing Scenarios and Assumptions
The second step in a bank's interest rate risk measurement process is to project future interest rate environments and to measure the risk to the bank in these environments by determining how certain influences (cash flows, market and product interest rates) will act together to change prices and earnings. Unlike the first step, in which one can be certain about data inputs, here the bank must make assumptions about future events. For the risk measurement system to be reliable, these assumptions must be sound. A bank's interest rate risk exposure is largely a function of (1) the sensitivity of the bank's instruments to a given change in market interest rates and (2) the magnitude and direction of this change in market interest rates. The assumptions and interest rate scenarios developed by the bank in this step are usually shaped by these two variables. Some common problems in this step of the risk measurement process include:
Failing to assess potential risk exposures over a sufficiently wide range of interest rate movements to identify vulnerabilities and stress points.
Failing to modify or vary assumptions for products with embedded options to be consistent with individual rate scenarios.
Basing assumptions solely on past customer behavior and performance without considering how the bank's competitive market and customer base may change in the future.
Failing to periodically reassess the reasonableness and accuracy of assumptions.

Future Interest Rate Assumptions
A bank must determine the range of potential interest rate movements over which it will measure its exposure. Bank management should ensure that risk is measured over a reasonable range of potential rate changes, including meaningful stress situations. In developing appropriate rate scenarios, bank management should consider a variety of factors such as the shape and level of the current term structure of interest rates and the historical and implied volatility of interest rates. The bank should also consider the nature and sources of its risk exposure, the time it would realistically need to take actions to reduce or unwind unfavorable risk positions, and bank management's willingness to recognize losses in order to reposition its risk profile. Banks should select scenarios that provide meaningful estimates of risk and include sufficiently wide ranges to allow management to understand the risk inherent in the bank's products and activities.

Banks should use interest rate scenarios with at least a 200-basis-point change taking place in one year. Since 1984, rates have twice changed that much or more in that period of time. The OCC encourages banks to assess the impact of both immediate and gradual changes in market rates as well as changes in the shape of the yield curve when evaluating their risk exposure. The OCC also encourages banks to employ stress tests that consider changes of 400 basis points or more over a one-year horizon. Although such a shock is at the upper end of post-1984 experience, it was typical between 1979 and 1984. Banks with significant option risk should include scenarios that capture the exercise of such options. For example, banks that have products with caps or floors should include scenarios that assess how the bank's risk profile would change should those caps or floors become binding. Some banks write large, explicitly priced interest rate options. Since the market value of options fluctuates with changes in the volatility of rates as well as with changes in the level of rates, such banks should also develop interest rate risk assumptions to measure their exposure to changes in volatility.

Developing Rate Scenarios
The method used to develop specific rate scenarios will vary from bank to bank. In building a rate scenario, the bank will need to specify:
The term structure of interest rates that will be incorporated in its rate scenario.
The basis relationships between yield curves and rate indices, for example, the spreads between Treasury, Libor, and CD rates.

The bank also must estimate how rates that are administered or managed by bank management (as opposed to those that are purely market driven) might change. Administered rates, which often move more slowly than market rates, include rates such as the bank's prime rate and the rates it pays on consumer deposits. From these specifications, the bank develops interest rate scenarios over which exposures will be measured. The complexity of the actual scenarios used may range from a simple assumption that all rates move simultaneously in a parallel fashion to more complex rate scenarios involving multiple yield curves. Banks will generally use one of two methods to develop interest rate scenarios:
The deterministic approach. Using this common method, the bank specifies the amount and timing of the rate changes to be evaluated. The risk modeler is determining in advance the range of potential rate movements. Banks using this approach will typically establish standard scenarios for their risk analysis and reporting, based on estimates of the likelihood of adverse interest rate movements. The bank may also include an analysis of its exposure under a most likely or flat rate scenario for comparative purposes. These standard rate scenarios are then supplemented periodically with stress test scenarios. The number of scenarios used may range from three (flat, up, down) to 40 or more. These scenarios may include rate shocks, in which rates are assumed to move instantaneously to a new level, and rate ramps, where rates move more gradually. Banks may use parallel and nonparallel yield curve shifts, with tests for yield curve twists or inversions. Models using deterministic rate scenarios generate an indicator of risk exposure for each rate scenario by highlighting the difference in net income between the base case and other scenarios. For example, the model may estimate the level of net income over the next 12 months for each rate scenario. Results often are displayed in a matrix-type table with exposures for base, high, and low rate scenarios.
The stochastic approach. Developed out of options and mortgage pricing applications, this method employs a model to randomly generate interest rate scenarios, and thousands of individual interest rate scenarios or paths are evaluated. Models using this approach generate a distribution of outcomes or exposures. Banks use these distributions to estimate the probabilities of a certain range of outcomes. For example, the bank may want to have 95 percent confidence that the bank's net income over the next 12 months will not decline by more than a certain amount.
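The two approaches can be sketched as follows. The shock sizes, the ramp horizon, and the simple random-walk process used for the stochastic paths are illustrative assumptions, not supervisory prescriptions or any particular vendor's method.

# Sketch of the two scenario-generation approaches described above; all
# parameters are assumptions chosen for illustration.
import random

def deterministic_scenarios(base_rate):
    """Parallel shocks and a gradual ramp, expressed as month-by-month rate paths."""
    shock_up   = [base_rate + 0.02] * 12                               # immediate +200 bp shock
    shock_down = [base_rate - 0.02] * 12                               # immediate -200 bp shock
    ramp_up    = [base_rate + 0.02 * (m + 1) / 12 for m in range(12)]  # gradual rise over a year
    return {"flat": [base_rate] * 12, "+200bp": shock_up,
            "-200bp": shock_down, "ramp +200bp": ramp_up}

def stochastic_scenarios(base_rate, n_paths=1000, monthly_vol=0.002, seed=7):
    """Randomly generated rate paths; a model would value the bank on each path."""
    random.seed(seed)
    paths = []
    for _ in range(n_paths):
        rate, path = base_rate, []
        for _ in range(12):
            rate = max(0.0, rate + random.gauss(0.0, monthly_vol))
            path.append(rate)
        paths.append(path)
    return paths

print(list(deterministic_scenarios(0.05)))   # scenario names
print(len(stochastic_scenarios(0.05)))       # 1000 simulated paths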

Behavioral and Pricing Assumptions When assessing its interest rate risk exposure, a bank also must make judgments and assumptions about how an instruments actual maturity or repricing behavior may vary from the instruments contractual terms. For example, customers can change the contractual terms of an instrument by prepaying loans, making various deposit withdrawals, or closing deposit accounts (deposit runoffs). The bank must assess the likelihood that customers will elect to exercise these options. These likelihoods will generally vary with each interest rate scenario. In addition, a banks vulnerability to customers exercising embedded options in retail assets and liabilities will vary from bank to bank because of differences in customer bases and demographics, competition, pricing, and business philosophies. Assumptions are especially important for products that have unspecified repricing dates, such as demand deposits, savings,


NOW and MMDA accounts (nonmaturity deposits), and credit card loans. Management must estimate the date on which these balances will reprice, migrate to other bank products, or run off. In doing so, bank management needs to consider many factors such as the current level of market interest rates and the spread between the banks offering rate and market rates; its competition from banks and other firms; its geographic location and the demographic characteristics of its customer base. A banks assumptions need to be consistent and reasonable for each interest rate scenario used. For example, assumptions about mortgage prepayments should vary with the rate scenario and reflect a customers economic incentives to prepay the mortgage in that interest rate environment. A bank should avoid selecting assumptions that are arbitrary and not verified by experience and performance. Typical information sources used to help formulate assumptions include:
Historical trend analysis of past portfolio and individual account behavior.
Bank- or vendor-developed prepayment models.
Dealer or vendor estimates.
Managerial and business unit input about business and pricing strategies.

Bank management should ensure that key assumptions are evaluated at least annually for reasonableness. Market conditions, competitive environments, and strategies change over time, causing assumptions to lose their validity. For example, if the bank's competitive market has changed such that consumers now face lower transaction costs for refinancing their residential mortgages, prepayments may be triggered by smaller reductions in market interest rates than in the past. Similarly, as bank products go through their life cycle, bank management's business and pricing strategies for the product may change.

A bank's review of key assumptions should include an assessment of the impact of those assumptions on the bank's measured exposure. This type of assessment can be done by performing what-if or sensitivity analyses that examine what the bank's exposure would be under a different set of assumptions. By conducting such analyses, bank management can determine which assumptions are most critical and deserve more frequent monitoring or more rigorous methods to ensure their reasonableness. These analyses also serve as a type of stress test that can help management to ensure that the bank's safety and soundness would not be impaired if future events vary from management's expectations.

Management should document the types of analyses underlying key assumptions. Such documents, which usually briefly describe the types of analyses, facilitate the periodic review of assumptions. They also help to ensure that more than one person in the organization understands how assumptions are derived. The volume and detail of that documentation should be consistent with the significance of the risk and the complexity of analysis. For a small bank, the documentation typically will include an analysis of historical account behavior and comments about pricing strategies, competitor considerations, and relevant economic factors. Larger banks often use more rigorous and statistically based analyses. The bank's key assumptions and their impact should be reviewed by the board, or a committee thereof, at least annually.

Computing Risk Levels
The third step in a bank's risk measurement process is the calculation of risk exposure. Data on the bank's current position is used in conjunction with its assumptions about future interest rates, customer behavior, and business activities to generate expected maturities, cash flows, or earnings estimates, or all three. The manner in which risk is quantified will depend on the methods of measuring risk.

Some banks encounter the following problems when using risk measurement systems:
The model no longer captures all material sources of a bank's interest rate risk exposure. Banks that have not updated risk measurement techniques for changes in business strategies and products or acquisition and merger activities can experience this problem.
Bank management does not understand the model's methods and assumptions. Banks that purchase a vendor model and fail to obtain current user guides and source documents that describe the model's implied assumptions and calculation methods may misinterpret model results or have difficulties with the measurement system.
Only one person in the bank is able to run and maintain the risk measurement system. Should that person leave the bank, the institution may not be able to generate timely and accurate estimates of its risk exposure. More than one person, when possible, should have detailed knowledge of the measurement system.

Calculating Risk to Reported Earnings
The OCC expects all national banks to have systems that enable them to measure the amount of earnings that may be at risk from changes in interest rates. Calculating a bank's reported earnings-at-risk is the focus of many commonly used interest rate risk models. When measuring risk to earnings, these models typically focus on:
Net interest income, or the risk to earnings arising from accrual accounts. This part of a bank's interest rate risk model is similar to a budget or forecasting model. The model multiplies projected average rates by projected average balances. The projected average rates and balances are derived from the bank's current positions and its assumptions about future interest rates, maturities and repricings of existing positions, and new business assumptions (a minimal sketch of this calculation follows this list).
Mark-to-market gains or losses on trading or dealing positions (i.e., price risk). This calculation is often performed in a separate market valuation model or subsystem of the interest rate risk model. In essence, these models project all expected future cash flows and then discount them back to a present value. The model measures exposure by calculating the change in net present values under different interest rate scenarios.
Rate-sensitive fee income, or the risk to earnings arising from interest-sensitive fee income or operating expenses. Examples include mortgage servicing fees and income arising from credit card securitization.
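A minimal sketch of the net interest income projection described in the first item above is shown below. The balances, rates, and repricing sensitivities are assumed values; a real model would work from the bank's actual position data and its scenario assumptions.

# Minimal sketch of a net interest income (NII) projection: projected balances
# times projected rates, repriced under each rate scenario. All inputs are assumptions.

def project_nii(assets, liabilities, rate_shift=0.0):
    """assets/liabilities: lists of (balance, current rate, repricing sensitivity)."""
    income  = sum(bal * (rate + sens * rate_shift) for bal, rate, sens in assets)
    expense = sum(bal * (rate + sens * rate_shift) for bal, rate, sens in liabilities)
    return income - expense

assets      = [(600, 0.07, 1.0),   # floating rate loans reprice fully with the shift
               (400, 0.06, 0.0)]   # fixed rate securities do not reprice
liabilities = [(700, 0.03, 1.0),   # short-term deposits reprice fully
               (200, 0.04, 0.0)]   # long-term funding does not

for shift in (-0.02, 0.0, 0.02):
    print(f"rate shift {shift:+.2%}: NII = {project_nii(assets, liabilities, shift):.1f}")
# NII falls as rates rise here because more liabilities than assets reprice,
# i.e., the illustrative bank is liability sensitive.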

Calculating Risk to Capital Banks that have significant medium- and long-term positions should be able to assess the long-term impact of changes in interest rates on the earnings and capital of the bank. Such an assessment affords the economic perspective or EVE. The appropriate method for assessing a banks long-term exposures will depend on the maturity and complexity of the banks assets, liabilities, and of balance- sheet activities. That method could be a gap report covering the full maturity range of the banks activities, a system measuring the economic value of equity, or a simulation model. To determine whether a bank needs a system that measures the impact of long-term positions on capital, examiners should consider the banks balance sheet structure and its exposure to option risk. For example, a bank with more than 25 percent of total assets in long-term, fixed rate securities and comparatively little in nonmaturity deposits or long-term funding may need to measure the long-term impact on the economic value of equity. If a bank is invested mainly in short-term securities and working capital loans and funded chiefly by short-term deposits, it probably would not. Banks can measure the volatility of long-term interest rate risk exposures using a variety of methods. For example, a bank that is considerably exposed to intermediate-term (three to five years) interest rate risk may elect to expand its earnings-at-risk analysis beyond the traditional one- to two-year time period. Gap reports that reflect a variety of rate scenarios and that provide sufficient detail in the timing of long-term mismatches may also be used to measure long-term interest rate risk.

Deterministic models, in contrast, view an option unrealistically as riskless until the predetermined rate path rises above the strike price, at which point the exposure estimate suddenly becomes very large. Risk Monitoring Interest rate risk management is a dynamic process. Measuring the interest rate exposure of current business is not enough; a bank should also estimate the effect of new business on its exposure. Periodically, institutions should reevaluate whether current strategies are appropriate for the banks desired risk profile. Senior management and the board should have reporting systems that enable them to monitor the banks current and potential risk exposure and to ensure that those levels are consistent with their stated objectives. Evaluating and Implementing Strategies Well-managed banks look not only at the risk arising from their existing business but also at exposures that could arise from expected business growth. In their risk-to-earnings analyses, they may make assumptions about the type and mix of activities and businesses as well as the volume, pricing, and maturities of future business. Typically, strategic business plans, marketing strategies, annual budgets, and historical trend analyses help banks to formulate these assumptions. Some banks may also include new business assumptions in analyzing the risk to the banks economic value. To do so, a bank first quantifies the sensitivity of its economic value of equity (EVE) to the risks posed by its current positions. Then it recomputes its EVE sensitivity as of a future date, under a projected or pro forma balance sheet. Although new business assumptions introduce yet another subjective factor to the risk measurement process, they help bank management to anticipate future risk exposures. When incorporating assumptions about new and changing business mix, bank management should ensure that those assumptions are realistic for the rate scenario being evaluated and are attainable given the banks competition and overall business strategies. In particular, bank management should avoid overly optimistic assumptions that serve to mask the banks interest rate exposure arising from its existing business mix. For example, to improve its earnings under a rising interest rate scenario, bank management may want to increase the volume of its floating rate loans and decrease its fixed rate loans. Such a restructuring, however, may take considerable time and effort, given the banks overall lending strategies, customer base, and customer preferences. Larger banks typically monitor their interest rate risk exposure frequently and develop strategies to adjust their risk exposures. These adjustments may be decisions to buy or sell specific instruments or from certain portfolios, strategic decisions for business lines, maturity or pricing strategies, and hedging or risk transformation strategies using derivative instruments. The banks interest rate risk model may be used to test or evaluate strategies before implementation. Special subsystems or models may be employed to analyze specific instruments or strategies, such as derivative transactions. The results from these models are entered into the overall interest rate risk model. Examiners should review and discuss with bank management how the bank evaluates potential interest rate risk

The OCC encourages banks with significant interest rate risk exposures to augment their earnings-at-risk measures with systems that can quantify the potential effect of changes in interest rates on their economic value of equity. With few exceptions, larger national banks engaging in complex on- and of balance- sheet activities need such measurement systems. To quantify its economic value of equity exposure, a bank generally will use either duration-based models (where duration is a proxy for market value sensitivity) or market (economic) valuation models. These models are essentially a collection of present value calculations that discount the cash flows derived from the current position and assumptions for a specified interest rate scenario.


Static discounted cash flow models are associated with deterministic models. In deterministic models, the user designates an interest rate scenario, and the model generates an exposure estimate for the scenario. Stochastic models use rate scenarios that are randomly generated. Exposure estimates are then generated for each scenario, and an estimate of expected value can be calculated from the distribution of estimates. Although stochastic models require more expertise and computing power than deterministic models, they provide more accurate risk estimates. Specifically, stochastic models produce more accurate estimates for options and products with embedded options. The value of most options increases continually as interest rates approach the options strike rates, and the probability of the option going into the money likewise increases continually. Stochastic models capture this effect because they calculate an expected value of future cash flows derived from a distribution of rate paths.
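The present value mechanics behind these economic valuation models can be sketched as follows. The cash flows and discount rates are assumptions; an actual model would discount scenario-consistent cash flows for every material instrument on and off the balance sheet.

# Sketch of the present value calculations behind an economic value of equity
# (EVE) measure: discount asset and liability cash flows, take the difference,
# and repeat under shifted rates. The cash flows below are assumptions.

def present_value(cash_flows, annual_rate):
    """cash_flows: amounts received at the end of years 1, 2, 3, ..."""
    return sum(cf / (1 + annual_rate) ** t for t, cf in enumerate(cash_flows, start=1))

def eve(asset_cfs, liability_cfs, rate):
    return present_value(asset_cfs, rate) - present_value(liability_cfs, rate)

asset_cfs     = [8, 8, 8, 8, 108]   # e.g., a five-year fixed rate loan portfolio
liability_cfs = [103]               # e.g., one-year deposits rolling off

for r in (0.04, 0.06, 0.08):
    print(f"rate {r:.0%}: EVE = {eve(asset_cfs, liability_cfs, r):.2f}")
# EVE shrinks as rates rise because the long-dated assets lose more value
# than the short-dated liabilities.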


exposures of new products or future business plans. Examiners should assess whether the banks assumptions about new business are realistic and attainable. In addition, examiners should review the banks interest rate risk strategies to determine whether they meet or are consistent with the stated goals and objectives of senior management and the board. Interest Rate Risk Reporting Banks should have an adequate system for reporting risk exposures. A banks senior management and its board or a board committee should receive reports on the banks interest rate risk profile at least quarterly. More frequent reporting may be appropriate depending on the banks level of risk and the likelihood of its level of risk changing significantly. These reports should allow senior management and the board or committee to do the following:
Evaluate the level and trends of aggregate interest rate risk exposure.
Evaluate the sensitivity of key assumptions, such as those dealing with changes in the shape of the yield curve or in the speed of anticipated loan prepayments or deposit withdrawals.
Evaluate the trade-offs between risk levels and performance. When management considers major interest rate strategies (including no action), they should assess the impact of potential risk (an adverse rate movement) against that of the potential reward (a favorable rate movement).
Verify compliance with the board's established risk tolerance levels and limits and identify any policy exceptions.
Determine whether the bank holds sufficient capital for the level of interest rate risk being taken.

Among the items that an audit should review and validate are:
The appropriateness of the bank's risk measurement system(s)

The reports provided to the board and senior management should be clear, concise, and timely and provide the information needed for making decisions. Reports to the board should also cover control activities. Such reports include (but are not limited to) audit reports, independent valuations of products used for interest rate risk management (e.g., derivatives, investment securities), and model validations comparing model predictions to performance. Risk Control A banks internal control structure ensures the safe and sound functioning of the organization generally and of its interest rate risk management process in particular. Establishing and maintaining an effective system of controls, including the enforcement of official lines of authority and appropriate separation of duties, is one of managements more important responsibilities. Persons responsible for evaluating risk monitoring and control procedures should be independent of the function they review. Key elements of the control process include internal review and audit and an effective risk limit structure. Auditing the Interest Rate Risk Measurement Process Banks need to review and validate each step of the interest rate risk measurement process for integrity and reasonableness. This review is often performed by a number of different units in the organization, including ALCO or treasury staff (regularly and routinely), and a risk control unit that has oversight


given the nature, scope, and complexity of its activities.
The accuracy and completeness of the data inputs into the model. This includes verifying that balances and contractual terms are correctly specified and that all major instruments, portfolios, and business units are captured in the model. The review also should investigate whether data extracts and model inputs have been reconciled with transactions and general ledger systems. It is acceptable for parts of the reconcilement to be automated; e.g., routines may be programmed to investigate whether the balances being extracted from various transaction systems match the balances recorded on the bank's general ledger. Similarly, the model itself often contains various audit checks to ensure, for example, that maturing balances do not exceed original balances. ALCO, audit staffs, or both may also perform more detailed, periodic audit tests of specific portfolios.
The reasonableness and validity of scenarios and assumptions. The audit function should review the appropriateness of the interest rate scenarios as well as customer behaviors and pricing/volume relationships to ensure that these assumptions are reasonable and internally consistent. For example, the level of projected mortgage prepayments within a scenario should be consistent with the level of interest rates used in that scenario. Generally this will mean using faster prepayment rates in declining interest rate scenarios and slower prepayment rates in rising rate scenarios. An audit should review the statistical methods that were used to generate scenarios and assumptions (if applicable), and whether senior management reviewed and approved key assumptions. The audit or review also should compare actual pricing spreads and balance sheet behavior to model assumptions. For some instruments, such as residential mortgage loans, estimates of value changes can be compared with market value changes. Unfavorable results may lead the bank to revise model relationships such as prepayment and pricing behaviors.
The validity of the risk measurement calculations. The validity of the model calculations is often tested by comparing actual with forecasted results. When doing so, banks will typically compare projected net income results with actual earnings. Reconciling the results of economic valuation systems can be more difficult because market prices for all instruments are not always readily available, and the bank does not routinely mark

responsibility for interest rate risk modeling. Internal and external auditors also can periodically review a banks process. At smaller banks, external auditors or consultants often perform this function. Examiners should identify the units or individuals responsible for auditing important steps in the interest rate risk measurement process. The examiner should review recent internal or external audit work papers and assess the sufficiency of audit review and coverage. The examiner should determine in particular whether an appropriate level of senior management or staff periodically reviews and validates the assumptions and structure of the banks interest rate risk measurement process. Management or staff performing these reviews should be sufficiently independent from the line units or individuals who take or create interest rate risk.

RISK MANAGEMENTFOR GLOBAL FINANCIAL SERVICES

all of its balance sheet to market. For instruments or portfolios with market prices, these prices are often used to benchmark or check model assumptions. The scope and formality of the measurement validation will depend on the size and complexity of the bank. At large banks, internal and external auditors may have their own models against which the banks model is tested. Larger banks and banks with more complex risk profiles and measurement systems should have the model or calculations audited or validated by an independent source either an internal risk control unit of the bank, auditors, or consultants. At smaller and less complex banks, periodic comparisons of actual performance with forecasts may be sufficient. Risk Limits The banks board of directors should set the banks tolerance for interest rate risk and communicate that tolerance to senior management. Based on these tolerances, senior management should establish appropriate risk limits that maintain a banks exposure within the boards risk tolerances over a range of possible changes in interest rates. Limit controls should ensure that positions that exceed predetermined levels receive prompt management attention. A banks limits should be consistent with its overall approach to measuring interest rate risk and should be based on its capital levels, earnings performance, and risk tolerance. The limits should be appropriate to the size, complexity, and capital adequacy of the bank and address the potential impact of changes in market interest rates on both reported earnings and the banks economic value of equity (EVE).

Many banks will use a combination of limits to control their interest rate risk exposures. These limits include primary limits on the level of reported earnings at risk and economic value at risk (for example, the amount by which net income and economic value may change for a given interest rate scenario) as well as secondary limits. These secondary limits form a second line of defense and include more traditional volume limits for maturities, coupons, markets, or instruments. Pricing policies and internal funds transfer pricing systems may also control the creation of interest rate risk exposures. Funds transfer systems typically require line units to obtain funding prices from the bank's treasury unit for large transactions. Those funding prices generally reflect the cost that the bank would incur to hedge or match-fund the transaction.

Examiners should identify and evaluate the types of limits the bank uses to control the risk to earnings and capital from changes in interest rates. In particular, the examiner should determine whether the risk limits are effective methods for controlling the bank's exposure and complying with the board's expressed risk tolerances. The examiner also should assess the appropriateness of the level of risk allowed under the bank's risk limits in view of the bank's financial condition, the quality of its risk management practices and managerial expertise, and its capital base.

Earnings-At-Risk Limits
Earnings-at-risk limits are designed to control the exposure of a bank's projected future reported earnings in specified rate scenarios. A limit is usually expressed as a change in projected earnings (in dollars or percent) over a specified time horizon and rate scenario. Banks typically compute their earnings-at-risk limits relative to one of the following target accounts: net interest income (NII), pre-provision net income (PPNI), net income (NI), or earnings per share (EPS). The appropriate target account may vary and generally depends upon the nature and sources of the bank's earnings exposure. For some banks, most if not all of their earnings volatility will occur in their net interest margin. For these banks, NII may be an appropriate target. In constructing a limit based on NII, however, bank management should consider and understand how variations in its margin may affect its bottom-line earnings performance. A bank with substantial overhead expenses, for example, may find that relatively small variations in its margin result in significant changes to its net income. Banks with significant non-interest income and expense items that are sensitive to interest rates generally should consider a more bottom-line-oriented target account, such as NI or EPS.

Capital-At-Risk (EVE) Limits
A bank's EVE limits should reflect the size and complexity of its underlying positions. For banks with few holdings of complex instruments and low risk profiles, simple limits on permissible holdings or allowable repricing mismatches in intermediate- and long-term instruments may be adequate. At more complex institutions, more extensive limit structures may be necessary. Banks that have significant intermediate- and long-term mismatches or complex options positions should establish limits to restrict possible losses of economic value or capital.

Gap Limits
Gap (maturity or repricing) limits are designed to reduce the potential exposure to a bank's earnings or capital from changes in interest rates. The limits control the volume or amount of repricing imbalances in a given time period. These limits often are expressed by the ratio of rate-sensitive assets (RSA) to rate-sensitive liabilities (RSL) in a given time period. A ratio greater than one suggests that the bank is asset-sensitive and has more assets than liabilities subject to repricing. All other factors being constant, falling interest rates generally will reduce the earnings of such a bank. An RSA/RSL ratio less than one means that the bank is liability-sensitive and that rising interest rates may reduce its earnings. Other gap limits that banks use to control exposure include gap-to-assets ratios, gap-to-equity ratios, and dollar limits on the net gap. Although gap ratios may be a useful way to limit the volume of a bank's repricing exposures, the OCC does not believe that, by themselves, they are an adequate or effective method of communicating the bank's risk profile to senior management or the board. Gap limits are not estimates of the earnings (net interest income) that the bank has at risk. A bank that relies solely on gap measures to control its interest rate exposure should explain to its senior management and board the level of earnings and capital at risk implied by its gap exposures (imbalances).
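To make the limit mechanics concrete, the short Python sketch below checks hypothetical earnings-at-risk results and an RSA/RSL ratio against board-approved tolerances. The figures, limit level, and function names are invented for illustration and are not drawn from any supervisory guidance.

# Hypothetical illustration: checking earnings-at-risk and gap-ratio limits.
# All balances, projections, and limit levels are made up for the example.

def earnings_at_risk_breaches(base_nii, nii_by_scenario, limit_pct):
    """Flag scenarios where projected NII falls below base NII by more than limit_pct."""
    breaches = {}
    for scenario, nii in nii_by_scenario.items():
        change_pct = (nii - base_nii) / base_nii * 100
        if change_pct < -limit_pct:
            breaches[scenario] = round(change_pct, 1)
    return breaches

def rsa_rsl_ratio(rate_sensitive_assets, rate_sensitive_liabilities):
    """RSA/RSL ratio for a time band: above 1 asset-sensitive, below 1 liability-sensitive."""
    return rate_sensitive_assets / rate_sensitive_liabilities

if __name__ == "__main__":
    base_nii = 40.0  # $ millions, flat-rate scenario
    projected_nii = {"+200bp": 35.5, "-200bp": 41.2, "+100bp": 38.0}
    # Only the +200bp scenario breaches a 10 percent earnings-at-risk limit here.
    print(earnings_at_risk_breaches(base_nii, projected_nii, limit_pct=10.0))
    print(round(rsa_rsl_ratio(250.0, 310.0), 2))  # 0.81: liability-sensitive in this band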


LESSON 23: INTEREST RATE RISK MODELS


Common Interest Rate Risk Models
Gap Reports
Gap reports are commonly used to assess and manage interest rate risk exposure, specifically a bank's repricing and maturity imbalances. However, as explained later in this lesson, a basic gap report can be an unreliable indicator of a bank's overall interest rate risk exposure. Although a simple gap report does not identify and quantify basis risk, yield curve risk, and option risk, bankers have modified gap reports to do so. Gap reports stratify all of a bank's assets, liabilities, and off-balance-sheet instruments into maturity segments (time bands) based on the instrument's next repricing or maturity date. Balances within a time band are then summed (assets are reported as positive amounts and liabilities as negative amounts) to produce a net gap position for each time band. Risk is measured by the size of the gap (the amount of net imbalance within a time band) and the length of time the gap is open.

Using properly prepared gap reports, a bank can identify and measure short and long-term repricing imbalances. With this information, a bank can estimate its earnings and economic risks within certain constraints. Gap reports can be particularly useful in identifying the repricing risk of a banks current balance sheet structure before assumptions are made about new business or how to effectively reinvest maturing balances. Within a given time band, a bank may have a positive, negative, or neutral gap. A bank will have a positive gap when more assets reprice or mature than liabilities. Because this bank has more assets than liabilities subject to repricing, the bank is said to be asset sensitive for that time band. An asset sensitive bank is generally expected to benefit from rising interest rates because its assets are expected to reprice more quickly than its liabilities.


A bank has a negative gap and is liability sensitive when more liabilities reprice within a given time band than assets. A bank that is liability-sensitive, such as the bank described in the gap report in table 1, usually benefits from falling interest rates. (The gap report in table 1 is a simplified example. In practice, most gap reports will contain many more line items and additional time bands.)

A bank whose assets equal liabilities within a time band is said to have a neutral gap position. A bank in a neutral gap position is not free of exposure to changes in interest rates, however. Although the bank's repricing risk may be small, it can still be exposed to basis risk, or changes in rate relationships. Traditionally, most bankers have used gap report information to evaluate how a bank's repricing imbalances will affect the sensitivity of its net interest income for a given change in interest rates. The same repricing information, however, can be used to assess the sensitivity of a bank's net economic value to a change in interest rates.
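As a rough illustration of the slotting and netting just described, the Python sketch below builds a toy gap report. The time bands, balances, and helper names are hypothetical and chosen only for the example; a real report would carry many more line items and bands.

# Hypothetical sketch of a simple gap report: slot positions into time bands
# and net assets (+) against liabilities (-) within each band.
from bisect import bisect_left

# Upper bounds of each time band, in months (last band covers beyond 10 years).
BAND_EDGES = [3, 6, 12, 36, 60, 120]
BAND_LABELS = ["0-3m", "3-6m", "6-12m", "1-3y", "3-5y", "5-10y", "10y+"]

def build_gap_report(positions):
    """positions: list of (balance, months_to_repricing); assets positive, liabilities negative."""
    gaps = [0.0] * len(BAND_LABELS)
    for balance, months in positions:
        gaps[bisect_left(BAND_EDGES, months)] += balance
    return dict(zip(BAND_LABELS, gaps))

if __name__ == "__main__":
    positions = [
        (50.0, 1),    # floating rate loans repricing within one month (asset)
        (30.0, 48),   # fixed rate securities maturing in four years (asset)
        (-60.0, 2),   # money market deposits assumed to reprice in two months (liability)
        (-20.0, 24),  # two-year certificates of deposit (liability)
    ]
    print(build_gap_report(positions))
    # 0-3m band nets to -10.0: more liabilities than assets reprice in that band.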


Construction of a Gap Report
As a general rule, all assets, liabilities, and off-balance-sheet items should be included in a bank's gap report. Less complex banks should, at a minimum, include all earning assets and interest-paying liabilities in their gap reports. A bank also should consider including potential repricings or maturities of all non-earning assets and non-interest-bearing liabilities in its reports. Non-earning assets such as non-accrual loans, for example, may at some point be collected or renegotiated, and then become repriceable. Non-interest-bearing liabilities (demand deposit account balances) also should be included in a bank's gap report even though such deposits do not bear an explicit rate of interest. Such deposits are included because their maturity or run-off exposes the bank to interest rate risk. (The bank may need to replace the deposits with interest-bearing sources of funds such as NOW accounts, certificates of deposit, or federal funds purchased.) If the bank operates significant books in currencies other than the dollar, it should prepare a separate gap report for each book. Why? Interest rates in different countries can move in different directions, and the volatility of such interest rates can differ considerably as well. A significant currency book would be one that represents at least 10 percent of total business. Many banks avoid open positions or repricing imbalances in their foreign currency books. If this is the bank's policy, gap reports for those currencies may not be needed.

Number of Time Bands
A bank must decide how many time bands it will use in its gap report. In general, the narrower the time bands, the more accurate the risk measures. To measure risk to earnings, the report should have at least monthly detail over the first year and quarterly detail over the second. If a gap report is used to capture long-term exposures and risk to economic value, the time bands should extend to the maturity of the last asset or liability.


Time bands for distant time periods, say, beyond 10 years, may be relatively wide (five years, for example). These wider time frames are justified because the change in interest rate sensitivity is small for maturities beyond 10 years. In other words, a bank's use of wide time bands beyond 10 years will not usually cause it to misestimate its interest rate risk exposure for items in those time bands.

Reporting of Off-Balance-Sheet Items
A gap report that does not include off-balance-sheet interest rate positions does not fully measure a bank's interest rate risk profile. All material positions in off-balance-sheet instruments whose value can be affected by interest rates should be captured in a gap report. Such instruments include interest rate contracts, such as swaps, futures, and forwards; option contracts, such as caps, floors, and options on futures; and firm forward commitments to buy or sell loans, securities, or other financial instruments. Off-balance-sheet instruments are often reported in a gap report using two entries to reflect how the instruments alter the timing of cash flows. The two entries of the contract are offsetting: one entry is the notional principal amount of the contract reported as a positive dollar value, and the other is an offsetting negative entry. If the off-balance-sheet position generally increases in value when interest rates fall (e.g., long futures, pay-floating swap, long call option, and short put option positions), the first entry is reported with a negative value and the second entry is reported with a positive value. Conversely, if the position generally increases in value when interest rates rise (e.g., short futures, pay-fixed swap, short call option, and long put option positions), the first entry is positive and the second is negative. This slotting reflects the impact of an off-balance-sheet instrument on the effective maturity of an asset on the balance sheet.

For example, if a bank has a $100 million five-year interest rate swap in which it receives a fixed rate and pays three-month Libor, the bank would report a positive $100 million in the five-year time band and a negative $100 million in the three-month time band. This treatment reflects the fact that the bank is long a fixed rate payment (as if it owned a fixed rate asset) and short a floating-rate payment (as if it had a floating-rate liability).

A long futures position would increase a bank's asset maturity, while a short futures position would decrease its asset maturity. Hence, a long position in a 10-year Treasury note future that expires in five months would be reported as a negative entry in the time band that covers five-month maturities and a positive entry in the time band that covers a 10-year instrument. As discussed in the next section, option instruments such as caps and floors pose special problems for gap reports. Because most gap reports usually assume a static interest rate environment at the current level of interest rates, they ignore caps and floors until the strike rate is hit. Suppose a bank has a long position in a 10-year interest rate cap hedging a floating-rate funding position. Before the strike rate is hit, the report would show the position as a floating rate liability and would ignore the cap; after the strike rate is hit, the position becomes a 10-year fixed rate liability.

Reporting of Options-Related Positions
Many consumer products have embedded options in them because the customer has the right to change the terms of a

contract or to act when warranted by market conditions. When a customer exercises the option, the bank loses a valuable asset that will no longer pay interest. Since these products are germane to a banks interest rate risk exposure, institutions should incorporate them into their gap reports. In a product with an embedded option, the cash flows will depend on the path of interest rates; different interest rate paths need to be considered because the dates of the options exercise will change accordingly, affecting cash flows. A single gap report gives an incomplete picture of products with embedded options because it allows for only one repricing date. Three methods of incorporating options exposures into gap reports are popular with banks. An examiner encountering a bank using another method should analyze the approach to determine whether it properly incorporates the asymmetrical impact of options on future net interest income and economic value. The first method either recognizes that the cap is in full effect for the remaining life of the product or ignores it for that same period. The following example illustrates this all-or-nothing approach to a cap on a floating rate loan: The bank has a 10-year $100,000 floating rate loan that reprices every six months but is subject to a 12 percent lifetime cap (the rate on the loan cannot exceed 12 percent). The all-or-nothing approach would consider the loan a six-month floating rate loan when rates are below 12 percent. If rates equal or exceed 12 percent, the loan becomes a fixed rate loan with a 10-year repricing maturity. This approach has several weaknesses. First, the method does not correctly reflect the exposure of net interest income to future changes in interest rates. For example, when the loan is slotted as a six-month repricing asset and funded with a six-month CD, the gap report would not indicate any interest rate risk. If interest rates were to rise above 12 percent, however, the loan could not reprice further but the funding costs on the CD could continue to rise, and interest rate margins would decline. Second, this treatment does not suggest how this exposure may be hedged. Neither hedging the asset as a six-month floating rate asset nor hedging it as a 10-year fixed rate asset would be appropriate. A better approach would be for the bank to prepare two gap reports, one for a high-rate scenario and the other for a low-rate scenario. Under the high-rate scenario, the cap would be binding and the gap report would show the capped loans as fixed rate assets. Under the low-rate scenario, the gap report would show the loan as a floating rate asset. A bank could use similar approaches to measure prepayment option risks associated with fixed rate residential mortgage loans. Under the high-rate scenario, the weighted average lives of the fixed rate mortgages would be extended in the gap report, reflecting the effect of slower prepayments. Under the low-rate scenario, the weighted lives would be shortened, reflecting faster prepayments. Comparing the gaps between the two schedules provides an indication of the amount of option risk the bank faces. Although this second method provides a way to assess how embedded options may alter a banks repricing imbalances under alternative interest rate scenarios, it also has limitations. Like the all-or-nothing approach, this method suggests that an option has value only when it becomes binding


or is in the money. In reality, an option has value throughout its life. The value of the option will depend on such factors as the time to expiration of the option, the distance from the strike price, and the volatility of interest rates. A third approach for incorporating options into gap reports varies the value of the option according to the change in the value of the underlying instrument. Incorporating the delta-equivalent value of the option into the gap report does this. The delta-equivalent value of an option, a mathematically derived weighting between 0 percent and 100 percent, reflects the probability that the option will go in the money. In the illustration of the loan with the 12 percent lifetime cap described above, the bank could strip the cap from the loan and treat the cap and loan as two separate instruments. The bank would report the loan as a six-month floating rate loan and the cap as an off-balance-sheet instrument, based on the cap's delta-equivalent value. The delta-equivalent value would equal the delta of the cap times the notional value of the cap (in this case, the principal amount of the loan, or $100,000). The cap in this example would have a delta between 50 percent and 100 percent when rates are greater than 12 percent. The high level of the delta indicates a high probability of the cap being effective over the life of the loan. If market rates were at 8 percent, however, the delta would be much lower, reflecting a lower probability that the cap will be effective over the life of the loan.

The delta approach also has limitations. The delta of an option changes in a nonlinear fashion with the passage of time and with the level of interest rates. As a result, the delta value of an option is valid only for small changes in interest rates, and this value changes over time.

Measuring Risk to Net Interest Income
After a bank has stratified the bank's assets, liabilities, and off-balance-sheet instruments into time bands and determined how it will treat embedded options, it must measure net interest income (NII) at risk. The formula to translate gaps into the amount of net interest income at risk, measuring exposure over several periods, is:

(Periodic gap) x (change in rate) x (time over which the periodic gap is in effect) = change in NII

Applying this formula to the sample gap report shown in table 1 and calculating the change in the bank's net interest income for an immediate 200-basis-point increase in rates can illustrate it. For example, the bank has a negative gap of $20 million in the one-month to three-month time band. This means that more liabilities than assets will reprice or mature during this time frame. Hence, for the remaining 10 months of the bank's 12-month time horizon, the bank will have $20 million more of liabilities than assets that have repriced at higher (200 basis points higher) rates. As shown in table 2, the increase in rates reduces the bank's earnings for the 10-month period by approximately $333,000. The cumulative earnings effect of the bank's repricing imbalances over the 12-month horizon is a reduction in net interest income of approximately $362,500.

It is important to stress that this method of measuring a bank's net interest income at risk is very crude and employs numerous simplifying assumptions, including the following:

All repricings and maturities within a time band occur simultaneously (as in the above formula), typically at the beginning, middle, or end of the period.

All maturing assets and liabilities are reinvested at overnight rates.

No other new business is booked.

There is an instantaneous change in the overnight rate to a new and constant level.

All interest rates move the same amount.

Using simulation models can test the sensitivity of the results to these assumptions.
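The periodic-gap formula above can be applied mechanically, as in the Python sketch below. Only the -$20 million band mirrors the example in the text; the second band and the function name are invented for illustration.

# Sketch: translate periodic gaps into a change in net interest income (NII) using
# (periodic gap) x (rate change) x (fraction of the horizon the gap is in effect).

def nii_change(periodic_gaps, rate_change, horizon_months=12):
    """periodic_gaps: list of (gap_amount, months_remaining_in_horizon)."""
    total = 0.0
    for gap, months_remaining in periodic_gaps:
        total += gap * rate_change * (months_remaining / horizon_months)
    return total

if __name__ == "__main__":
    gaps = [
        (-20_000_000, 10),  # 1-3 month band: liabilities reprice 200bp higher for about 10 months
        (5_000_000, 6),     # hypothetical 3-6 month band gap
    ]
    print(round(nii_change(gaps, 0.02)))
    # The -$20 million band alone contributes roughly -$333,000,
    # consistent with the figure cited in the text for that band.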

Measuring Risk to Economic Value
Gap reports may be used to measure the exposure of a bank's net economic value to a change in interest rates. To do so, a bank multiplies the balances in each time band by a price sensitivity factor that approximates, for a given change in interest rates, the percentage change in the present value of an instrument with similar cash flow and maturity characteristics. For example, consider a bank that has $10 million of two-year Treasury notes slotted in the time band covering from two years to three years in its gap report. To estimate the market value sensitivity of those balances to a 200-basis-point increase in market interest rates, a banker would multiply those balances by a factor that approximates the change in the present value of a two-year Treasury note for a 200-basis-point movement in rates. The present value of a note with a 7.5 percent coupon would decline 3.6 percent for such a rate movement. Hence, the estimated decline in the market value of the bank's $10 million two-year Treasury note would be approximately $360,000 ($10 million times negative 3.6 percent). Similar price sensitivity factors can be applied to other types of instruments and time bands. The exposure of the bank's net economic value would be the sum of the weighted balances.

Limitations of Gap Reports

Basis Risk
The focus of a gap report is on the level of net repricings. The assumption is that within a given time band, assets and liabilities fully offset or hedge each other. In practice, however, assets and liabilities price off different yield curves or indices and do not move together at all points. To facilitate an

interpretation of basis risk, some bankers group instruments with similar basis relationships into separate line items within the report and report average rates and yields on those groups. For example, within a 30- to 60-day time band, the repricing imbalance for accounts tied to CD rates could be reported as one line item, followed by balances tied to the Treasury curve. This approach provides a rough approximation of the degree of basis risk present in the balance sheet. Alternatively, some banks prepare beta-adjusted gap reports in an attempt to measure basis risk. In this type of report, the repricing balance for each account type is multiplied by a factor that approximates the correlation between that account's pricing behavior and a benchmark market interest rate. For example, the report could compare the pricing behavior for all accounts to the federal funds rate. If the analysis revealed that the bank's pricing on money market deposit accounts moves 50 basis points for every 100-basis-point movement in the federal funds rate, 50 percent of such balances would be shown as short-term rate-sensitive, and the remaining balances would be assigned a longer maturity. Even beta-adjusted gap reports, however, do not always provide a complete picture of a bank's basis risk because the correlation between account pricing and market interest rates may not be the same for rising and declining interest rate environments, or even for similar rate environments at different points in time. In such cases, a bank may need to formulate different correlations or beta factors for each rate scenario it develops. Given the limitations of gap reports, intuition and judgment are required when using them to quantify the exposure of earnings to changes in interest rates.

Yield Curve Risk
To measure a bank's cumulative repricing risk over several periods or time bands, most users of gap reports simply sum the gaps across each time band to produce a net cumulative gap position. Implicit in this act is an assumption that movements in interest rates will be perfectly correlated across the time bands and will move in a parallel fashion. Applying different weights to each time band can amend this assumption. For example, gaps in the shorter time bands could be weighted more heavily than those in the longer time bands because short-term interest rates are usually more volatile and usually move by larger amounts than long-term rates.
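A minimal Python sketch of the weighting idea follows; the band gaps and weights are hypothetical and serve only to show how a weighted cumulative gap can differ from the simple sum.

# Hypothetical sketch: a weighted cumulative gap, giving more weight to gaps in
# shorter time bands where rates tend to be more volatile.

def weighted_cumulative_gap(band_gaps, weights):
    """band_gaps and weights: parallel lists ordered from shortest to longest band."""
    return sum(gap * weight for gap, weight in zip(band_gaps, weights))

if __name__ == "__main__":
    band_gaps = [-20.0, -5.0, 10.0, 15.0]  # $ millions per time band
    weights = [1.0, 0.8, 0.6, 0.4]         # heavier weights on shorter bands
    print(sum(band_gaps))                              # simple cumulative gap: 0.0
    print(weighted_cumulative_gap(band_gaps, weights)) # -12.0: reveals a liability-sensitive tilt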


The pattern of a bank's repricing gaps across the various time bands can provide an indication of the bank's exposure to changes in yield curve shapes. Suppose a bank is liability sensitive (has negative gaps) in the short- and long-term time bands and asset sensitive in the intermediate time bands. Such a bank is exposed to a flattening of the yield curve, in which short-term rates go up and long-term rates remain stable. The bank's net interest margin deteriorates as the rates on its short-term liabilities increase. Because long-term rates remain stable, however, the market value of its long-term liabilities remains constant. Hence, the bank will not benefit from a decline in the expected future value of its long-term obligations.

Option Risks
As noted in earlier discussions, it is difficult to capture option risks with gap reports. Options introduce an asymmetrical and nonlinear element to a bank's risk profile. Although techniques such as preparing multiple gap reports and reporting options by their delta-equivalent values attempt to overcome some of


these weaknesses, they are unable to fully capture all of the dimensions of option risk. To do so, a bank that has significant option risk must supplement its gap reports with simulation or option pricing models.

Intra-Period Gaps
Although gap reports rely on stratifying balances into broad time bands, they do not detect imbalances within those bands. Some bankers have partly overcome this weakness by reporting the weighted average repricing maturity within each time band. Another method is to reduce the width of the bands.

New Business
Many gap reports used by banks consider only the bank's current financial positions. These reports are called static reports because they capture only the risk that arises from the bank's existing balance sheet structure and do not incorporate any assumptions about new business. Some banks may also prepare dynamic gap reports. Typically, these reports are generated from the bank's earnings simulation models and show how the bank's gap would appear at some point in the future, after new business assumptions are incorporated into the risk measure.

Bank Simulation Models
Simulation models may be used for measuring interest rate risk arising from current and future business scenarios. They can be used to measure risk from either an earnings or economic perspective. The models simulate or project a bank's risk exposure under a variety of assumptions and scenarios and, thus, can be used to isolate sources of a bank's risk exposure or quantify certain types of risk. To do so, a bank performs a series of simulations and applies different assumptions and scenarios to each simulation. In general, earnings simulation models are more dynamic than gap analyses and market valuation simulations. Whereas gap and market valuation models generally take a snapshot of the risk inherent in a bank's balance sheet structure at a particular point in time, most earnings simulation models evaluate risk exposure over a period of time, taking into account projected changes in balance sheet structures, pricing, and maturity relationships, and assumptions about new business. Banks often use simulation models to analyze alternative business decisions and to test the effect of those decisions on a bank's risk profile before implementation. Banks also use simulation models in budgeting and profit planning processes.

Construction of a Simulation Model
Most simulation models are computer-based models that perform a series of calculations under a range of scenarios and assumptions. From data on the bank's current position and managerial assumptions about future interest rate movements, customer behavior, and new business, a simulation model projects future cash flows, income, and expenses. These assumptions include different loan growth and funding plan scenarios and other assumptions about how a bank's assets and liabilities will be replaced. The main components of a simulation model are presented in the table below. Data from a bank's general ledger and transaction systems generally provide information on the bank's current position for each

portfolio in the model's chart of accounts. This information is similar to that used for a gap report and includes current balances, rates, and repricing and maturity schedules. New business and reinvestment plans, which are generally more subjective, are based on management's assumptions. Those assumptions might be derived from historical trends, business plans, or econometric models. Both market interest rates and business mix are forecasted. Forecasts of interest rates involve forecasts of their direction, the future shape of the yield curve, and the relationship between the various indices that the bank uses for pricing products.


The bank's potential exposure is estimated by calculating how a change in rates will affect the value, income, and expense of the bank's current and forecasted financial positions. The output of a typical simulation model consists of: 1) future balance sheet and income statements under a number of interest rate and business-mix scenarios; 2) an analysis of the impact of the different scenarios on the value of the target account; and 3) graphical representations of the analysis that are often used to communicate results to senior management and the board.

Measurement of Risk
The greater the interest rate risk, the greater the change in the value of a targeted account under different interest rate scenarios. The target account is usually net interest income or net income. Many simulation models also are capable of measuring changes in the market value of equity. Several business mix and rate scenarios usually are run. Rate scenarios often include rising, flat, and declining rates, as well as a most probable scenario.


Table 3 illustrates the type of summary report that may be generated by an earnings simulation model. The report shows variation in net interest income under alternative interest rate scenarios using a flat rate scenario as a base. Similar reports are often developed to show how net interest income might vary with alternative business mixes and strategies.
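A bare-bones version of such a summary can be produced by comparing each scenario's projected net interest income with the flat-rate base case, as in the Python sketch below; the scenario figures are invented for illustration.

# Hypothetical scenario summary: projected NII by rate scenario versus a flat-rate base.

def scenario_summary(projections, base_scenario="flat"):
    base = projections[base_scenario]
    return {
        scenario: {"nii": nii, "vs_base_pct": round((nii - base) / base * 100, 1)}
        for scenario, nii in projections.items()
    }

if __name__ == "__main__":
    projections = {"flat": 40.0, "+200bp": 36.8, "-200bp": 41.5, "most_probable": 39.2}  # $ millions
    for scenario, row in scenario_summary(projections).items():
        print(scenario, row)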

A bank might have risk limits that restrict losses in the account at risk for a defined interest rate scenario over a certain period of time. For example, the bank in the table above might limit losses in annual net interest income from a 200-basis-point change in rates to 10 percent of its base net interest income.

Advantages of Simulation Models
Simulation models allow some of the assumptions underlying gap reports to be amended. For instance, gap reports assume a one-time shift in interest rates. Simulation models can handle varying interest rate paths, including variations in the shape of the yield curve. Gap reports usually assume the improbable: that all current assets and liabilities run off and are reinvested overnight. Simulation models can be more realistic. A simulation model can accommodate various business forecasts and allow flexibility in running sensitivity analyses. For instance, basis risk can be evaluated by varying the spreads between the indices the bank uses to price its products. Perhaps the strongest advantage of simulation models is that they can present risk in terms that are meaningful and clear to senior management and boards of directors. The results of simulation models present risk and reward under alternative rate scenarios in terms of net interest income, net income, and present value (economic value of equity). These terms are basic financial fundamentals that are readily understood by bank management. Simulation models can vary greatly in their complexity and accuracy. As the cost of computing technology has declined, simulation models have improved. Some simulation models can:

Handle the intermediate principal amortizations of products such as installment loans.

Handle caps and floors on adjustable rate loans and prepayments of mortgages or mortgage-backed securities under various interest rate scenarios (embedded options).

Handle nonstandard swaps and futures contracts.

Change spread relationships to capture basis risk.

Model a variety of interest rate movements and yield curve shapes.

Test for internal consistency among assumptions.

Analyze market or economic risk as well as risk to interest income.

Limitations of Simulation Models
Although offering greater versatility than the alternatives, simulation is not always objective. A simulation can misrepresent the bank's current risk position because it relies on management's assumptions about the bank's future business. The myriad of assumptions that underlie most simulation models can make it difficult to determine how much a variable contributes to changes in the value of the target account. For this reason, many banks supplement their earnings simulation measures by isolating the risk inherent in the existing balance sheet using gap reports or measurements of risk to the economic value of equity. In measuring their earnings at risk, many bankers limit the evaluation of their risk exposures to the following two years because interest rate and business assumptions that project further are considered unreliable. As a result, banks that use simulation models with horizons of only one or two years do not fully capture their long-term exposure. A bank that uses a simulation model to measure the risk solely to near-term earnings should supplement its model with gap reports or economic value of equity models that measure the amount of long-term repricing exposures.

Economic Value Sensitivity and Duration Models
Techniques that measure economic value sensitivity can capture the interest rate risk of the bank's business mix across the spectrum of maturities. Economic value sensitivity systems generally compute and measure changes in the present value of the bank's assets, liabilities, and off-balance-sheet accounts under alternative interest rate scenarios.

Table 4 illustrates the type of output that is generated by economic value sensitivity models. In this example, the economic value of the banks equity would be adversely affected by a rise in interest rates. For example, if rates rose by 200 basis points, the present value of the banks assets would decline by $2.5 million, whereas the present value of the banks liabilities would decline by only $1.5 million. As a result, the banks net economic value would decline by $1 million from the base scenario.


Construction of Economic Value Models
Most economic value measurement systems are a form of simulation model. Typically, these models first estimate the current, or base case, present value of all of the bank's assets, liabilities, and off-balance-sheet accounts. The model projects the amount and timing of the cash flows that are expected to be generated by the bank's financial instruments under the base case interest rate scenario. These cash flows are then discounted by an appropriate discount factor to arrive at a net present value. For the base case scenario, the bank's net economic value equals the present value of expected cash flows from the bank's assets, minus the present value of expected cash flows from the bank's liabilities, plus or minus the present value of expected cash flows from the bank's off-balance-sheet positions.

To measure the sensitivity of the bank's economic exposure to changes in interest rates, the model then performs similar calculations of expected discounted cash flows for alternative interest rate scenarios. The level and timing of cash flows for products with option features will often vary with each rate scenario being evaluated. For example, the rate of mortgage prepayments increases as interest rates decrease.

Measurement of Risk
For alternative scenarios, the change in net economic value from the base case represents the interest rate sensitivity of the bank's net economic value. The greater the change in net economic value, the greater the potential risk exposure of the bank.
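The Python sketch below illustrates the base-case-versus-shocked-scenario comparison under heavy simplifications (single bullet cash flows and one flat discount rate per scenario). All balances, maturities, and rates are hypothetical.

# Sketch: net economic value (NEV) sensitivity under a rate shock.

def present_value(cash_flows, rate):
    """cash_flows: list of (amount, years_until_received); flat annual discounting."""
    return sum(amount / (1.0 + rate) ** years for amount, years in cash_flows)

def net_economic_value(assets, liabilities, rate):
    return present_value(assets, rate) - present_value(liabilities, rate)

if __name__ == "__main__":
    assets = [(40.0, 0.5), (60.0, 3.0), (70.0, 7.0)]        # $ millions of expected inflows
    liabilities = [(80.0, 0.25), (40.0, 1.0), (10.0, 5.0)]  # $ millions of expected outflows
    base = net_economic_value(assets, liabilities, 0.06)
    shocked = net_economic_value(assets, liabilities, 0.08)  # +200 basis points
    print(round(base, 2), round(shocked, 2), round(shocked - base, 2))
    # A negative difference means economic value falls as rates rise, as for the
    # long-asset, short-funded bank described around table 4.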


Duration
Many economic sensitivity models also compute the duration of a bank's financial instruments. Duration is a measure of the sensitivity of market values to small changes in interest rates. If interest rates increase, the market value of a fixed income instrument will decline. Duration indicates by how much. The duration of a fixed income instrument that has no option features is the percentage change in the market value of the instrument from a change in market rates. For instance, the market value of a bond with a duration of five will decline by roughly 0.5 percent if interest rates increase by 10 basis points. Before advances in computing technology made simulations of net present values under multiple interest rate scenarios feasible, some bankers used duration as a proxy for estimating the net economic value of their institution. Duration is still used by many bank managers as a basis for evaluating the relative risks of different financial instruments, portfolios, or investment strategies. Duration incorporates an instrument's remaining time to maturity, the level of interest rates, and intermediate cash flows. If a fixed income instrument has only one cash flow, as a zero coupon bond does, duration will equal the maturity of the instrument: a zero coupon bond with five years remaining to maturity has a duration of five years. If coupon payments are received before maturity, the duration of the bond declines, reflecting the fact that some cash is received before final maturity. For example, a five-year 10 percent coupon bond has a duration of 4.2 years in a 10 percent interest rate environment. Duration is calculated by weighting the present value of an instrument's cash flows by the time to receipt of those cash flows. Table 5 illustrates the calculation of the Macaulay and modified durations of a $100,000 two-year note that pays interest semiannually, has a 7.5 percent coupon, and was purchased at par to yield 7.5 percent. This note has a modified duration of 1.82. If rates were to increase 100 basis


points, the value of this note would be expected to decline by approximately 1.82 percent.
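As a cross-check on the figures quoted above, the Python sketch below computes the Macaulay and modified durations of the $100,000 two-year, 7.5 percent semiannual-pay note priced at par. The instrument matches the one described in the text; the function and its name are an illustrative construction.

# Macaulay and modified duration of a level-coupon note priced from its yield.
# Cash flows: semiannual coupons plus principal at maturity.

def durations(face, annual_coupon_rate, annual_yield, years, freq=2):
    c = face * annual_coupon_rate / freq          # periodic coupon
    y = annual_yield / freq                       # periodic yield
    n = int(years * freq)
    cash_flows = [(t, c + (face if t == n else 0.0)) for t in range(1, n + 1)]
    price = sum(cf / (1 + y) ** t for t, cf in cash_flows)
    macaulay_periods = sum(t * cf / (1 + y) ** t for t, cf in cash_flows) / price
    macaulay_years = macaulay_periods / freq
    modified_years = macaulay_years / (1 + y)
    return macaulay_years, modified_years

if __name__ == "__main__":
    mac, mod = durations(100_000, 0.075, 0.075, 2)
    print(round(mac, 2), round(mod, 2))
    # Roughly 1.89 and 1.83, consistent with the modified duration of about 1.82 quoted above.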


Duration can measure the exposure of the economic value of a single contract or a portfolio of contracts carried at market value. The duration of a portfolio of contracts can be calculated by computing the weighted average maturity of all the cash flows in the portfolio individually. However, because the duration of individual instruments is usually readily available, most banks estimate the duration of a portfolio of contracts by weighting the durations of the individual contracts and summing them. Many banks use duration to measure and limit the risk of a portfolio of fixed income contracts. This measurement is much more precise than simply limiting the amount of securities with certain maturities a bank may hold. Duration also allows portfolio managers to combine the risks of different contracts based on their price sensitivity and to hedge the net risk of the portfolio.
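Because durations are additive when weighted by market value, a portfolio's duration can be approximated as a value-weighted average, as in the hypothetical Python sketch below.

# Hypothetical sketch: market-value-weighted portfolio duration and the approximate
# dollar change in value for a given rate move.

def portfolio_duration(holdings):
    """holdings: list of (market_value, modified_duration)."""
    total_value = sum(value for value, _ in holdings)
    return sum(value * duration for value, duration in holdings) / total_value

if __name__ == "__main__":
    holdings = [(500_000, 6.0), (500_000, 2.0)]   # two bonds of equal market value
    d = portfolio_duration(holdings)
    print(d)                                      # 4.0, as in the two-bond example in this lesson
    rate_move = 0.01                              # +100 basis points
    print(round(-d * rate_move * sum(v for v, _ in holdings)))  # approximate dollar change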

The calculations in table 5 do not adjust the expected cash flows of the bond to changes in interest rates. Hence, this calculation (modified duration) is not valid for instruments, such as callable bonds and mortgage-backed securities, whose options will change their cash flows as interest rates move. To correct for this problem, many banks use what has become known as effective duration. Effective duration is derived by using simulation techniques to calculate the change in price of an instrument for a given change in interest rates. The concepts of effective duration and convexity are discussed in more detail in a later section.

Properties of Duration
In general, duration exhibits the following characteristics:

The higher the duration, the greater the price sensitivity of the instrument to changes in market interest rates.

For two instruments with the same maturity, a high-coupon instrument will have a lower duration than a low-coupon instrument and will also be less price sensitive. A larger proportion of a high coupon's cash flows will be received sooner, and thus the average time to receipt of the cash flows will be less.

A given fixed income instrument will have a higher duration in a low interest rate environment than in a high interest rate environment.

Duration may be positive or negative. A fixed rate instrument would have a positive duration, and an increase in interest rates would generally decrease the market value of the instrument. Mortgage servicing rights and interest-only (IO) mortgage-backed securities generally have a negative duration, since an increase in interest rates would decrease the prepayment speed of the underlying mortgages, increasing the market value of the instruments.

Durations are additive when weighted by the amount of the contract. For example, if a portfolio consists of two bonds of equal market value, one with a duration of six and the other with a duration of two, the duration of the portfolio would be four.

Duration Can Measure the Exposure of a Portfolio of Instruments


Table 6 illustrates how duration may be used to calculate the interest rate risk of a portfolio of fixed income contracts.

The weighted duration of the portfolio is 4.04. If interest rates were to increase by 1 percent, the market value of the portfolio would decline by approximately 4.04 percent, or $11,629.

Duration Can Measure the Economic Value of Equity
Some banks use duration to measure or hedge the sensitivity of the economic value of their portfolio equity to changes in interest rates. The duration of equity is derived from the duration of all assets, liabilities, and off-balance-sheet contracts. To understand how the duration of equity measures risk, the economic value of portfolio equity may be viewed as a net bond position. Assets are analogous to long bond positions with positive durations, and liabilities are analogous to short bond positions with negative durations. Duration indicates whether the economic value of the net bond position or portfolio equity will increase or decrease with a change in rates. A bank with long-term assets funded by short-term liabilities will generally have a duration of equity that is positive. The economic value of portfolio equity of this bank will decline as interest rates rise. A bank with short-term assets funded with long-term liabilities will generally have a negative duration of equity. The economic value of this bank will increase as interest rates rise. The higher the duration of a bank's equity (whether the number is positive or negative), the more sensitive is its economic value to changes in rates.

Advantages of Duration
Duration is a useful tool for setting risk limits either on the net economic value of the bank or for selected portfolios, such as investment portfolios. Some banks attempt to limit their economic exposures through simple position limits, which are

usually based on maturity. Such limits, however, do not precisely assess the sensitivity of market values to changes in rates, something limits based on duration can do. Limits based on duration analysis are best expressed in terms of dollar changes in market or economic value. Duration measures the percentage change in value rather than the actual dollar change. To calculate exposure of the account at risk (the economic value of equity), a bank must weight the durations of assets, liabilities, and off-balance-sheet accounts by their economic values.

Limitations of Duration
Duration as a measure of the sensitivity of economic value also has limitations:

Macaulay and modified duration accurately measure changes in value for small and generally parallel changes in interest rates. However, modified duration cannot measure changes in value for nonparallel changes in interest rates, and there is no practical method by which effective duration can measure nonparallel shifts. The margin of error, which increases with the size of the interest rate change, is called convexity.

The duration of different instruments will change at different rates as time passes (duration drift). In other words, in a portfolio hedged for duration, the effectiveness of the hedge will diminish over time.

Macaulay and modified duration assume that the expected cash flows of a fixed income instrument will not change with interest rate movement. Hence, these duration measures are not accurate for instruments with embedded options, which often grow more sensitive to interest rates as rates rise. In other words, an instrument that declines in value by 1 percent for a 100-basis-point increase in interest rates might decline by 3 percent for a 200-basis-point increase and by 6 percent for a 300-basis-point increase.

Convexity and Effective Duration
Banks can adjust modified duration to overcome some of the problems of convexity. Effective duration incorporates changes in cash flow that occur in instruments with options. (Convexity reflects a nonlinear shift in the price/yield relationships of instruments with and without options.) However, effective duration is useful only for a specific interest rate change. To obtain an instrument's effective duration, calculate its present values at two different market yields and obtain the percentage change in price (PV+ and PV-). Divide the absolute difference between the two present values by the bond's original (base-case) market price (PV) times the assumed change in yield (y) times two. The resulting number is the instrument's effective duration.

For example, a bank can calculate the effective duration of a Government National Mortgage Association security after a 100-basis-point rise in interest rates. Assume the security is currently trading at par to yield 7 percent. The bank first estimates the present value of the security if interest rates increase to 8 percent. In calculating this present value the bank takes into account that the cash flows of this security will increase because prepayments will slow. The present value at 8 percent (PV+) is $94. Then the bank estimates the present value at 6 percent (PV-), taking into account the decrease in cash flows because the rate of prepayment is higher. The present value at 6 percent (PV-) is $104. The bond's effective duration, ($104 - $94) / (2 x $100 x 0.01), is 5. In other words, the bond's value will decline by approximately 5 percent for the 100-basis-point increase in interest rates.

Monte Carlo Simulation
Monte Carlo simulation measures the probable outcomes of events, such as a movement in interest rates, that have a random or stochastic element. The simulation models discussed previously measure the value of the bank under a limited number of interest rate scenarios. Such approaches are deterministic because the possible interest rate paths are predetermined and controlled by the model user. Although deterministic models are valuable, their outcomes depend on the interest rate scenarios. If actual interest rates differ from assumptions, the risk to the bank may be substantially different from the measured risk. The outcome of a Monte Carlo simulation is less preordained than that of a deterministic simulation because its statistical modeling technique generates thousands of randomly determined interest rate paths. These interest rate paths result in a distribution of possible interest rate scenarios. The value of the bank or the bank's portfolios is then evaluated for each of the possible interest rate paths, yielding a range of possible values or outcomes.

Construction of a Monte Carlo Simulation
Formulating the average Monte Carlo model is quite complex:

1. The first step is to develop the underlying probability distribution for interest rates that will generate the random interest rate paths. Typically, the current forward yield curve is used to anchor the probability distribution.

2. A model generates a multitude of random interest rate paths (typically several thousand). However, certain properties are usually built into this process to ensure that the mean (average) interest rate generated is consistent with the current structure of interest rates and that the dispersion (distribution) of possible interest rates is consistent with observed volatility. These properties are important to ensure that the model does not introduce the possibility of risk-free arbitrage. Essentially, the properties assume that markets efficiently and fairly price securities, such that one cannot construct instruments with equivalent risk and higher returns than what the market commands.

3. The cash flows corresponding to each of the randomly developed interest rate paths are calculated. That is, the bank specifies the relationships between the interest rates and the cash flows of the bank's portfolios. For example, the bank would develop a prepayment function that relates mortgage prepayments with each interest rate path. Once adjusted for prepayments and other interest-rate effects, the cash flows are said to be option-adjusted.

4. The option-adjusted cash flows for each rate path are discounted by the risk-free rate to obtain their net present value. All of these outcomes are summed, and the total is divided by the total number of rate paths evaluated to produce an expected net present value for the distribution. If the cash flows have been adjusted correctly and the interest rate paths correctly reflect market expectations about the distribution of possible future interest rate outcomes, this expected net present value represents the base-case market price. If the model's assumptions are accurate, the cash flows have been adjusted for all risks, and the market for the instrument under consideration operates according to the underlying theory (which assumes risk-neutral valuation), this base-case price should be within a few basis points of observable market prices. If the net present value does not match the market price, common practice is to add a fixed spread known as the option-adjusted spread (OAS) to the risk-free rate.

5. After obtaining the base-case price in step 4, the current forward yield curve is shocked for each of the interest rate scenarios that banks consider in their risk analysis. For example, if the bank is evaluating its risk for a parallel 200-basis-point increase in rates, it would shift the underlying distribution of interest rates (developed in step 1) by 200 basis points such that the expected mean (average) is 200 basis points higher across the maturity spectrum. Steps 2, 3, and 4 are then repeated, except that the market price that results represents the price that would result if interest rates were to change as assumed for that rate scenario. The resulting estimates are used to fill in a report such as the one illustrated in table 4.

Advantages of Monte Carlo Simulation
Monte Carlo simulation is a powerful risk analysis tool because it alone, of the tools discussed in this lesson, can accurately and clearly adjust risk estimates for optionality and convexity. The capital markets employ Monte Carlo techniques to price interest rate derivative products and residential mortgage products using OAS analysis. Banks can employ Monte Carlo techniques to understand and evaluate current market pricing as well as their economic value at risk.
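Before turning to the limitations, the deliberately simplified Python sketch below illustrates the random-path machinery: it draws lognormal short-rate paths anchored at a current rate, discounts a single fixed cash flow along each path, and averages the results. The rate process, parameters, and instrument are assumptions made purely for illustration; production models are calibrated to the forward curve and observed volatility and adjust cash flows for embedded options.

# Simplified Monte Carlo sketch: generate random short-rate paths, discount one
# cash flow along each path, and average the discounted values.
import math
import random

def simulate_paths(r0, annual_vol, years, steps_per_year, n_paths, seed=42):
    random.seed(seed)
    dt = 1.0 / steps_per_year
    paths = []
    for _ in range(n_paths):
        r, path = r0, []
        for _ in range(int(years * steps_per_year)):
            # lognormal step keeps rates positive; drift omitted for simplicity
            r *= math.exp(annual_vol * math.sqrt(dt) * random.gauss(0.0, 1.0))
            path.append(r)
        paths.append(path)
    return paths

def expected_pv(cash_flow, paths, steps_per_year):
    dt = 1.0 / steps_per_year
    pvs = []
    for path in paths:
        discount = math.exp(-sum(r * dt for r in path))
        pvs.append(cash_flow * discount)
    return sum(pvs) / len(pvs)

if __name__ == "__main__":
    paths = simulate_paths(r0=0.05, annual_vol=0.15, years=2, steps_per_year=12, n_paths=2000)
    # Expected present value of a $100 cash flow received in two years, averaged over the paths.
    print(round(expected_pv(100.0, paths, steps_per_year=12), 2))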


Monte Carlo simulations, like all interest rate risk measurement systems, are only as good as the data and assumptions underlying the analysis. Two critical assumptions in Monte Carlo analysis are the process used to derive the interest rate paths and the cash flow relationships developed for each interest rate path. If these assumptions are faulty, the results of the simulation will be suspect. Monte Carlo simulations are complicated to develop and require substantial computing technology. To correctly derive and apply this modeling process, a bank must have staff members with considerable expertise in financial and statistical theory. Model Exposure Regardless of the type of model used, banks should take care to minimize model exposure. Financial models fall into error for many reasons. Users may make incorrect assumptions about deposit behavior or about changes in the spread between interest rates. They may select a model that is not appropriate for all parameters. A model that provides reasonable results for a certain range of inputs may fail to do so for extreme assumptions. Some model users misuse good models; for

11D.571.3

ww Co w.p m dfw P iza D rd. F com Tr i


Notes:
Copy Right: Rai University

5. After obtaining the base-case price in step 4, the current forward yield curve is shocked for each of the interest rate scenarios that banks consider in their risk analysis. For example, if the bank is evaluating its risk for a parallel 200-basis-point increase in rates, it would shift the underlying distribution of interest rates (developed in step 1) by 200 basis points such that the expected mean (average) is 200 basis points higher across the maturity spectrum. Steps 2,3, and 4 are then repeated, except that the market price that results represents the price that would result if interest rates were to change as assumed for that rate scenario. The resulting estimates are used to fill in a report such as the one illustrated in table 4.

One can estimate how much, in percent, convexity can change the price of an option-free instrument:

Table 7 calculates the convexity of the 7.5 percent coupon bond shown in table 5 and how much, in percent, this convexity will change the bonds price. The total change in price, in percent, of the 7.5 coupon bond after a 100- basis point move can now be estimated by summing the changes caused by modified duration and convexity. (For optionfree bonds, convexity will always have a positive effect on price, and duration will have a negative effect.) Thus, the 7.5 coupon bond is estimated to decline by 1.80 percent (duration of minus 1.82 percent plus convexity of 0.02 percent) after a 100- basispoint increase in rates. If rates decrease by 100 basis points, the bond is estimated to increase in price by 1.84 percent (duration of 1.82 percent plus convexity of 0.02 percent).

al

these outcomes are summed, and the total is divided by the total number of rate paths evaluated to produce an expected net present value for the distribution. If the cash flows have been adjusted correctly and the interest rate paths correctly reflect market expectations about the distribution of possible future interest rates outcomes, this expected net present value represents the base-case market price. If the models assumptions are accurate, the cash flows have been adjusted for all risks, and the market for the instrument under consideration operates according to the underlying theory (which assumes risk neutral valuation), this base-case price should be within a few basis points of observable market prices. If the net present value does not match the market price, common practice is to add a fixed spread known as the option-adjusted spread (OAS) to the risk-free rate.

Technical Note: Calculating Convexity
The convexity of an option-free fixed income instrument is measured by the following formula:
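The formula itself did not survive reproduction. A standard effective-convexity formula, consistent with the usage in the surrounding text, is

\[ C = \frac{P_{+} + P_{-} - 2P_{0}}{P_{0}\,(\Delta y)^{2}}, \]

where \(P_{0}\) is the initial price, \(P_{+}\) and \(P_{-}\) are the prices after yields rise and fall by \(\Delta y\), and \(\Delta y\) is expressed in decimal form. With this definition, the convexity contribution to the percentage price change is \(\tfrac{1}{2}C(\Delta y)^{2}\), matching the approximation quoted earlier; treat the exact scaling as an assumption, since conventions vary.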


We will define credit risk for investment management firms as the risk that credit-related events will cause investments to underperform benchmarks or realize losses. Traditionally, investment managers have managed credit risk by establishing and complying with credit concentration guidelines, usually expressed in nominal terms. These measures, however, fail to take into account the complexities of measurement across portfolios and measurement of credit exposures other than in security positions. Traditional measures also fail to support the active management of credit risk taken in a portfolio because they do not provide return analyses for credit risks taken. Active credit risk portfolio management techniques allow investment managers to measure returns gained for credit risks taken and to fine-tune portfolios to match credit risk appetites and optimize risk and return. Use of these measures of the return/risk trade-off results in maximizing the efficiency of risk-taking.


Management and measurement of credit risk pose challenges even greater than those posed by market risk. Investment management firms must define the credit exposure of different types of positions and establish measures that allow for aggregation across positions and across multiple time horizons. Firms must also obtain information about counterparties in addition to information about positions. They need to evaluate credit mitigation methods as part of the overall credit risk management process and incorporate the effects of credit mitigation into credit risk measures. Finally, firms need to develop portfolio-level measures of credit performance to account for diversification effects and to allow for the active management of credit risks taken. Current best practice in credit risk measurement consists of expected and unexpected loss measures and portfolio management measures, which require current and potential exposure calculations. Each of these types of measures serves a different purpose in credit risk management. Each also represents a different objective and level of sophistication in the credit risk management process, as further discussed below.
Current and Potential Exposure Calculations


LESSON 24: MEASURING CREDIT RISK


A first step in the measurement of credit risk is to determine the exposure to counterparties in each portfolio and across portfolios (firm-wide exposure). These exposures may exist in securities or derivatives positions, or they may arise from pending settlements of contracts, positions executed with a financial institution, or letters of credit issued by an institution for another counterparty. The purpose of exposure calculation is to support the assessment of portfolio and firm compliance with policies and guidelines and to assess credit concentrations. For security positions, exposure is straightforward (i.e., equal to the value of the security). For derivatives positions and security transactions that have not yet settled, exposure measurement must consider not only the value of the position today, but also the potential change in value of the position over its life (or until security settlement) that could lead to a larger exposure on the position. This maximum potential exposure estimates the degree of credit risk in the position and is analogous to the credit exposure on the value of a security position. This exposure also allows for aggregation across all portfolio positions and is necessary for the determination of expected and unexpected losses. Data requirements for credit exposure calculations go well beyond those required for market risk measures. In order to measure the current and potential exposure of portfolio positions, investment managers need appropriate valuation methodologies to determine the current value of positions. In addition, they need the ability to:
Aggregate positions by counterparty: To properly aggregate exposures, the investment manager must first identify all of its counterparties, then link each of its positions to the appropriate counterparty, and link each counterparty to the appropriate parent obligor. As part of this process, the investment manager must specify the level at which a counterparty will be defined. For example, a firm may define counterparties at a parent or subsidiary level, or both. Generally, the firm should have the ability to identify where exposures may be linked through corporate ownership.
Simulate future market values: Firms must have the ability to model estimated changes in the value of a financial instrument position over its life. Generally, this is done by randomly drawing a large number of market moves and calculating the value of the position under each market move, generating a distribution of potential market values. Potential credit exposure on the position is then defined as an extreme (99 percent confidence interval, for example) value of the positions creating credit exposure to the firm.
Net exposure where netting agreements are allowed: After counterparties are defined, any netting and collateral agreements with those counterparties should be noted and exposure for those positions covered by netting and/or collateral agreements calculated accordingly. Consequently, systems used to measure exposure should have the ability to identify where netting may be performed and then net appropriately.
Incorporate collateral held to mitigate exposure: Systems should also have the ability to incorporate any collateral held into the exposure calculation.

Investment management firms face a constant challenge in maximizing returns. As markets become more liquid and arbitrage opportunities decrease, portfolio managers must seek additional sources of risk to enhance portfolio returns. One way in which managers can do this is to increase credit risk in security positions. A second approach is to use derivatives, which, for over-the-counter derivatives, also results in additional credit risk. Both of these methods allow portfolio managers to take on measurable risk that may be compared to returns to determine the efficiency of the portfolio strategy. With the intensified pressure to deliver risk-adjusted performance, the ability of investment management firms to measure and monitor the credit risk in security and derivatives positions has taken on increased importance.


Expected and Unexpected Loss Measurement
Once current and potential credit exposures are calculated, expected and unexpected losses may be determined using estimated default probabilities and recovery rates. Expected credit losses are defined as the mean of the credit loss distribution based on the distribution of credit exposures, the default probability of the counterparty, and the expected recovery rate if a counterparty were to default. Unexpected credit losses are defined as an extreme (e.g., 99 percent confidence interval) level of loss derived from the credit loss distribution. Expected credit loss calculations are used to determine expected net returns to the portfolio, and unexpected credit losses are used to determine extreme potential credit losses to the portfolio, similar to the market risk losses estimated by value at risk.
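As a rough illustration of how these measures fit together, the Python sketch below simulates an exposure distribution for a single position, reads off a 99 percent potential exposure, and then derives expected and unexpected losses from an assumed default probability and recovery rate. Every number here (current value, volatility, default probability, recovery) is a hypothetical input, not a calibrated estimate.

import numpy as np

rng = np.random.default_rng(0)

n_scenarios = 10_000
current_value = 1.0e6                                       # current mark-to-market
simulated_values = current_value * np.exp(rng.normal(0.0, 0.15, n_scenarios))
exposures = np.maximum(simulated_values, 0.0)               # credit exposure is never negative

pfe_99 = np.percentile(exposures, 99)                       # potential exposure at 99 percent

pd_1y = 0.02                                                # assumed default probability
recovery = 0.40                                             # assumed recovery rate

defaults = rng.random(n_scenarios) < pd_1y                  # default indicator per scenario
losses = np.where(defaults, exposures * (1.0 - recovery), 0.0)

expected_loss = losses.mean()                               # mean of the loss distribution
unexpected_loss = np.percentile(losses, 99)                 # extreme (99 percent) loss level
print(pfe_99, expected_loss, unexpected_loss)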

Monitoring many different credit guidelines presents both financial and compliance risk, and places a huge reliance on systems to accurately reflect the many investments in the portfolios. Finally, the complexity and computational intensity of credit risk measures place increased reliance on systems as well.
Credit Derivatives Market
Markets in credit risk transfer have the potential to contribute to a more efficient allocation of credit risk in the economy. They could enable banks to reduce concentrations of exposure and diversify risk beyond their customer base. Liquid markets could also provide valuable price information, helping banks to price loans and other credit exposures. They might allow institutions other than banks to take on more credit risk, so that the immediate relationship banks have with end-borrowers need not mean they are excessively exposed to them. A number of primary and secondary markets in debt instruments bearing credit risk are well established. Investment grade and, increasingly in North America and Europe, sub-investment grade borrowers are able to issue debt securities directly through international and domestic bond markets. Bank loans to companies are distributed through initial syndication and can be sold through the secondary loan market, including to non-banks. The development of securitisation techniques has allowed banks to sell portfolios of all kinds of loans (e.g., mortgage, credit card, automobile) provided investors can be shown that the aggregate cashflows behave in a reasonably predictable manner. All of these markets, however, require the taker of credit risk to provide funding, either directly to the borrower or to the bank selling the debt, in order to buy an underlying claim on the borrower. Credit derivatives differ because credit risk is transferred without the funding obligation. The taker of credit risk provides funds ex post only if a credit event occurs. Credit derivatives therefore allow banks to manage credit risk separately from funding. They are an example of the way modern financial markets unbundle financial claims into their constituent elements (credit, interest rate, funding, etc.), allowing them to be traded in standardized wholesale markets and rebundled into new composite products that better meet the needs of investors. In the case of credit derivatives, the standardized wholesale market is in single-name credit default swaps and the new composite products include portfolio default swaps, basket default swaps, synthetic collateralized debt obligations (CDOs) and credit-linked notes.


Expected and unexpected loss measurement requires even more data about the counterparty, specifically default probabilities and recovery rates. Historic default probabilities are available from public rating agencies, such as Standard & Poor's and Moody's. Default probabilities may also be determined using vendor methodologies, such as KMV's Credit Monitor, the RiskMetrics Group's CreditManager, Credit Suisse First Boston's CreditRisk+, and Moody's RiskCalc. Where default probability methodologies rely on historic data, questions about applicability exist due to the small number of data points available. Other methodologies that do not rely on historic default data rely on certain assumptions about firms and markets that must be considered. Historic recovery rates on senior securities are available from Standard & Poor's and Moody's.
Portfolio Management Measures
In the context of portfolio measurement, credit risk management has advanced from an exposure limitation and loss avoidance process to a process of active management and fine-tuning of credit risks taken in the portfolio. Active portfolio management results not only in measuring and limiting credit risks taken but in optimizing the return gained for a given level of credit risk. Portfolio management concepts related to market risk are fundamental and well developed within the investment management industry. Credit risk portfolio measures, though not as well developed in the industry, are simply an extension of this. For example, a portfolio manager may use measures of credit risk-adjusted return on capital to optimize the overall asset mix. A manager may also use portfolio weighted-average risk grades to monitor the overall credit quality of a portfolio.
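The two portfolio measures mentioned above can be illustrated with a toy calculation. The sketch below assumes a numeric risk-grade scale and uses unexpected loss as a crude capital proxy; both are simplifying assumptions for illustration, not a standard industry methodology.

# Each position: (market value, risk grade 1=strongest..10=weakest,
#                 expected return, expected loss, unexpected loss), all hypothetical
positions = [
    (4_000_000, 3, 0.055, 0.002, 0.010),
    (6_000_000, 5, 0.070, 0.006, 0.025),
]

total_mv = sum(mv for mv, *_ in positions)
weighted_grade = sum(mv * grade for mv, grade, *_ in positions) / total_mv

# Credit risk-adjusted return on capital: net return after expected loss,
# divided by a capital proxy (here, unexpected loss), aggregated across positions
net_return = sum(mv * (ret - el) for mv, _, ret, el, _ in positions)
capital = sum(mv * ul for mv, *_, ul in positions)
raroc = net_return / capital

print(round(weighted_grade, 2), round(raroc, 2))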


Other Issues to be Considered
Measurement and monitoring of credit risk by investment managers pose specific issues not faced by other types of institutions. For example, investment management firms must measure and monitor credit risk at both the firm and portfolio level. This is complicated by the fact that each portfolio may have specific credit guidelines against which it must be monitored.


Hedging with Credit Derivative Instruments


Portfolio transactions
Just as CDS can be used to unbundle credit risk, they can also be combined to create new portfolio instruments with risk and return characteristics designed to meet the demands of particular protection buyers and sellers. This use of CDS to construct portfolio instruments is part of the evolution of the market in collateralised debt obligations (CDOs). In its simplest form, a CDO is a debt security issued by a special purpose vehicle (SPV) and backed by a diversified loan or bond portfolio (see Diagram 2). The diversification of the portfolio distinguishes CDO transactions from asset-backed securitisation (ABS) of homogenous pools of assets such as mortgages or credit card receivables, a more established technique. The economics of CDOs is that the aggregate cashflows on a diversified portfolio have a lower variance than the cashflows on each individual credit, the lower risk enabling CDOs to be issued at a lower average yield. Because these are structured deals, they do not have standardised features in the same way as a single-name CDS. But transactions can be distinguished according to three characteristics.


A CDS is similar, in economic substance, to a guarantee or credit insurance policy, to the extent that the protection seller receives a fee ex ante for agreeing to compensate the protection buyer ex post, but provides no funding. Being a derivative, however, makes a CDS different. Both guarantees and credit insurance are designed to compensate a particular protection buyer for its losses if a credit event occurs. The contract depends on both the state of the world (has a credit event occurred or not?) and the outcome for the buyer (has it suffered losses or not?). A CDS, by contrast, is state-dependent but outcome-independent. Cashflows are triggered by defined credit events regardless of the exposures or actions of the protection buyer. For this reason, credit derivatives can be traded on standardized terms amongst any counterparties. The single name CDS market allows a protection buyer to strip out the credit risk from what may be a variety of different exposures to a company or country (loans, bonds, trade credit, counterparty exposures, etc.) and transfer it using a single, standardized commodity instrument. Equally, market participants can buy or sell positions for reasons of speculation, arbitrage or hedging even if they have no direct exposure to the reference entity. For example, it is straightforward to go short of credit risk by buying protection using CDS. Standardization, in turn, facilitates hedging and allows intermediaries to make markets by buying and selling protection, running a matched book.

The original CDO structure involved the transfer of the underlying bonds or loans to an SPV, which then issued CDOs backed by the cashflows on this portfolio. Most CDOs are still funded transactions of this type. Increasingly, however, CDSs are used to transfer the credit risk to the SPV, leading to so-called synthetic CDOs. Alternatively, the protection buyer enters into a portfolio CDS (a CDS referenced to a portfolio of companies or sovereigns rather than a single name) directly with the seller, or embeds a portfolio CDS in a so-called credit-linked note (CLN) issued directly to the seller, avoiding the use of an SPV altogether. These variants are summarised in the table below:

Entering into a portfolio default swap directly with the protection buyer is the simplest of these structures. But it exposes both parties to potential counterparty risk and, if the protection buyer is a bank, it will only obtain a lower regulatory capital requirement if the protection seller is also a bank (see Box 2). A CLN protects the buyer against counterparty risk on the seller but not vice versa. It can be an attractive option if the protection buyer (issuer) is, for example, a highly rated bank and the seller (investor) is a pension or mutual fund, with funds to invest. Some investors may also have regulatory or contractual restrictions on their use of derivatives but not purchases of securities such as CLNs. CLNs, however, still involve the protection seller taking counterparty risk on the buyer. Partly for this reason, most CDOs continue to involve an SPV. In a typical synthetic structure, the SPV issues CDOs to the end-sellers of protection and invests the proceeds in high-quality collateral securities, such as G7 government bonds, bonds issued by government-sponsored agencies, mortgage bonds (Pfandbriefe) or highly rated asset-backed securities (see Diagram 3). The end-sellers receive the return on the collateral, often swapped into a floating rate, together with the premium on the default swap. Principal and/or interest payments are reduced if credit events occur on the reference portfolio. In this case, the bank/sponsor has a claim on the SPV under the CDS, backed by the collateral, which is typically cash-settled. This structure has advantages for the protection buyer and the end-sellers:
It reduces counterparty credit risk for both parties. Both have potential claims on the SPV that are at least partly backed by the collateral securities. The SPV should be remote from the bankruptcy of either party.

There is no universally accepted definition of a credit derivative. The focus here is on single-name credit default swaps and the structured portfolio transactions put together using them.
Single-name credit default swaps
In a credit default swap (CDS), one counterparty (known as the protection seller) agrees to compensate another counterparty (the protection buyer) if a particular company or sovereign (the reference entity) experiences one of a number of defined events (credit events) that indicate it is unable or may be unable to service its debts (see Diagram 1). The protection seller is paid a fee or premium, typically expressed as an annualized percentage of the notional value of the transaction in basis points and paid quarterly over the life of the transaction. Box 1 describes single name CDS in more detail.



1. Whether protection is funded or unfunded and sold directly or via an SPV?

The CDOs can be structured so that they are high yielding but the principal is protected by the value of the collateral securities (principal-protected notes). Some insurance companies find this type of investment attractive (see below).
If a bank has bought protection against its loanbook, some regulators may allow a lower regulatory capital requirement on the underlying loans if the counterparty is an SPV that is restricted to holding OECD government bonds.
2. How the risk and return on the portfolio is tranched to give different protection sellers obligations with varying degrees of leverage?
The risk on portfolio transactions is usually divided into at least three tranches. For example, a US$100 million portfolio may have US$10 million first loss, US$20 million mezzanine and US$70 million senior pieces. If there is a US$15 million loss on the portfolio following a series of credit events, the seller of protection on the first loss tranche loses US$10 million and the seller on the mezzanine US$5 million. In effect, the holder of the first loss (or equity) tranche has leveraged the credit risk on the underlying portfolio by ten times whereas the holder of the senior piece may have a much lower risk security. Typical market practice at present is to tranche the risk so that the senior position is Aaa/AAA-rated and the mezzanine position Baa2/BBB-rated.
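The loss waterfall in that US$100 million example can be expressed in a few lines of Python. This is only a sketch of sequential loss allocation across tranches; the function name and structure are illustrative, not drawn from any transaction's documentation.

def allocate_losses(portfolio_loss, tranches):
    """Allocate a portfolio loss to tranches in order of subordination.
    tranches: list of (name, size) starting with the first loss piece."""
    allocation, remaining = [], portfolio_loss
    for name, size in tranches:
        hit = min(remaining, size)
        allocation.append((name, hit))
        remaining -= hit
    return allocation

print(allocate_losses(15_000_000, [("first loss", 10_000_000),
                                   ("mezzanine", 20_000_000),
                                   ("senior", 70_000_000)]))
# the first loss seller bears US$10m and the mezzanine seller US$5m, as in the text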

(see below) are said to be important sellers of protection on super-senior tranches, often via back-to-back transactions with another bank or securities firm in order to obtain a reduced capital requirement for the bank protection buyer. Super-senior tranches are intended to be almost free of credit risk: they rank higher than senior tranches, which are often AAA-rated. Annual premia are correspondingly low, ranging between 6 and 12 basis points, depending on market conditions. But the notional value of the exposures can be very large. For example, super-senior tranches on large diversified portfolios of investment grade credits may cover the last 90% of losses on transactions of US$ billions in size.


Tranching can be achieved in different ways depending on the structure of the transaction. If the risk on the entire portfolio is transferred to an SPV (whether through sales of the underlying asset or a series of CDSs), it can issue securities with varying degrees of seniority. If, however, protection is purchased directly from sellers, tranching must be included within the contractual terms of the portfolio CDS or credit-linked note.

More senior tranches of CDOs are more likely, in practice, to be unfunded than first loss or mezzanine tranches. This is partly because the amounts involved are larger and partly because protection buyers prefer to avoid counterparty risk on equity and mezzanine tranches because of the greater likelihood that these tranches will bear losses. Recently, a hybrid structure has been popular with European banks. It involves an SPV selling protection to a bank on the mezzanine/senior tranche of risk on a portfolio against issuance of tranched CDOs. The bank separately buys protection directly on a so-called super-senior tranche using a portfolio CDS. This might specify, for example, that the protection seller will compensate the buyer if credit events on the reference portfolio lead to losses in excess of 20% of the portfolio value over the life of the transaction (Diagram 4). Monoline insurers


Basket default swaps allow protection sellers to take leverage in a slightly different way. A first-to-default basket is a CDS that is triggered if any reference entity within a defined group experiences a credit event. Typically the transaction is settled through physical delivery of obligations of the entity that experienced the credit event. For example, an investor might enter into a US$100 million first-to-default basket on five European telecoms, receiving a spread significantly higher than that for a single-name CDS on any one of the names in the basket, although less than selling US$100 million protection on each company individually because the exposure is capped at US$100 million. The more risk averse can sell protection on second or even third-to-default baskets, which are triggered only if a credit event occurs on more than one name in the basket over the life of the transaction.
3. The nature of the reference portfolio
Commercial banks can use the CDO structure to transfer the credit risk on loans that they have originated. These are known as collateralised loan obligations (CLOs) or sometimes balance sheet transactions because the primary motivation is to remove risk from the balance sheet of the commercial bank. For example, it may want to reduce particular concentrations in its loanbook or to lower its regulatory capital requirements or to free up lines to counterparties. CLOs are generally large transactions, often billions of dollars. Reference portfolios are usually loans to large, rated companies but recent transactions have included loans to mid-sized companies. Growth of CLOs began in 1997, following JP Morgan's BISTRO programme. Another use of the structure is by fund managers to gain leverage for high-yield, managed investment portfolios. Such transactions, known as collateralised bond obligations (CBOs) or sometimes arbitrage CDOs, are much more common in the US, where sub-investment grade bond and secondary loan markets are more developed, than in Europe. Typically, an investment bank will


Market size
The credit derivative market has been growing rapidly but is probably still small relative to other OTC derivative and securities markets. Comprehensive, global data do not exist. The best sources are the British Bankers' Association's 2000 survey of its members and the quarterly statistics on outstanding derivatives positions of US commercial banks and trust companies published by the Office of the Comptroller of the Currency (OCC). The BBA survey suggests that the global credit derivatives market increased in size (measured by notional amount outstanding) from around US$151 billion in 1997 to US$514 billion in 1999, with the market expected to continue growing over 2001 and 2002. Market participants estimate that the market continues to double in size each year. The OCC data show that US commercial banks and trust companies had notional credit derivatives outstanding world-wide of US$352 billion at end-March 2001. Based on market participants' estimates of their market share compared to securities dealers and European banks, this is consistent with an overall market size of around US$1 trillion. According to the BBA survey, around half the market was in single name CDS (Chart 1). Another source of data on portfolio transactions is the volume of transactions rated globally by the major agencies. Moody's rated 38 CBOs in 2000, of which 12 were synthetic, and 51 CLOs, of which 32 were synthetic. The value of CBOs was around US$48 billion and of CLOs US$72 billion, suggesting that around US$50 billion of portfolio default swaps were agreed in 2000. By contrast, data from the Bank for International Settlements (BIS) show the largest derivatives




A third use of CDOs, also known as arbitrage transactions, is to repackage static portfolios of illiquid or high-yielding securities purchased in the secondary market. Examples of securities that have been repackaged in this way include asset-backed securities, mortgage-backed securities, high-yield corporate bonds, EME bonds, bank preferred shares and even existing CDOs. Intermediaries have also used CDS to create entirely synthetic tranches of exposure to reference portfolios (see below). For example, an intermediary might buy protection from a customer using a portfolio CDS designed to replicate the mezzanine tranche of a CDO referenced to a portfolio of European companies. It then hedges its position in the single name CDS market.

(US$16 trillion); and equities (nearly US$2 trillion). According to the OCC data, credit derivative exposures comprised less than 1% of US commercial banks and trust companies notional derivative exposures at end-March 2001. Although notional principal is only a loose guide, these figures suggest that using derivatives to trade credit risk remains small relative to their use to trade interest rate, foreign exchange and equity risk. The notional value of credit exposure being transferred through the market is also only a fraction of the debt held by US and European banks and by bondholders in the international and US domestic bond markets. Because one or more transactions with intermediaries will often occur between an initial protection buyer and a final protection seller, the figure of US$1 trillion is an upper bound on the actual value of exposure being transferred through the market. For comparison, the value of non-government debt outstanding in the international bond market was nearly US$5 trillion and in the US domestic bond market US$61/2 trillion at end-December 2000; and bank balance sheets totalled around US$5 trillion for US banks and 12 trillion for euro area banks at end-December 200011. Market participants say that about 500 to 1000 corporate names are traded actively in the single-name CDS market, although trades have occurred on up to 2000 names. Most of these companies are rated by the major agencies. Markets in single name CDS on sovereigns are typically more liquid than companies, but only about 10-12 sovereigns are traded mostly emerging market economies with less frequent trades in some G7 sovereigns such as Italy and Japan. The BBA survey found that 20% of reference entities were sovereigns and 80% companies. Market participants suggest that the proportion of emerging market sovereign trades was higher in 1997-98 at the time of the Asian crisis. Demand to buy protection on sovereigns is often from banks or other investors willing to extend credit to borrowers in a particular country but not to increase their country exposure beyond a certain limit known as line buying. The BBA survey reveals that in 1999 just under half of global trading was taking place in London. New York accounted for about the same proportion, with the remainder trading of local names in regional centres, principally Tokyo and Sydney. Market Participants A stylized structure of the credit derivatives market includes end-buyers of protection, seeking to hedge credit risk taken in

find investors willing to purchase mezzanine and senior tranches, and the fund manager (known as the collateral manager) will retain a share of the first loss risk and so the equity. Whereas CLOs are not actively managed (portfolios are typically static other than the replacement of maturing loans with others of similar characteristics), collateral managers are permitted to trade managed CBO portfolios in order to maximise yield for the equity investors. The exception is if the CBO breaches defined covenants such as interest cover or ratings requirements. In this case, any excess return on the portfolio is redirected from the equity holders to pay down the higher ranking tranches in order of seniority. CBO tranches are more likely to be fully funded than CLOs because the collateral manager typically needs cash to invest. But collateral managers are nonetheless often permitted to buy and sell protection using CDS as part of a CBO portfolio.

markets in terms of notional principal were those related to interest rates (US$65 trillion); foreign exchange rates


concentrated among a number of large intermediaries mainly US and European wholesale banks and securities houses. And the market appears to be facilitating a net transfer of credit risk from the banking sector to insurance companies and investment funds, mostly through portfolio transactions. What motivates these different groups of market participants? Commercial banks Compared to loan sales and securitisation, credit derivatives can be an attractive way for commercial banks to transfer credit risk because they do not require the loan to be sold unless and until a credit event occurs. This makes it easier to preserve the relationship with the borrower and is simpler administratively, especially in some European countries where loan transfers are complex, although the borrowers consent may still be needed to transfer the loan if physical settlement is agreed following a credit event. Use of credit derivatives also allows a bank to manage credit risk separately from decisions about funding. Securitisation can be an expensive source of funds for banks with large retail deposit bases, although market participants say that buying protection using CDS is often more expensive than selling loans in the secondary market, perhaps reflecting concerns about moral hazard (see below). Lending to customers is typically one of a bundle of banking services including deposit taking and liquidity management, access to payment


other parts of their business; end-sellers of protection, usually looking to diversify an existing portfolio; and, in the middle, intermediaries, which provide liquidity to end-users of CDS, trade for their own account and put together and manage structured portfolio products. The BBA survey gives some idea of which institutions fall into these three categories (Chart 2). By far the biggest players are the intermediaries, including investment banking arms of commercial banks and securities houses and therefore split between these two categories in Chart 2. They are thought to run a relatively matched book but are probably, in aggregate, net buyers. OCC data show that this is the case for the large US banks (Chart 3). End-sellers include commercial banks, insurance companies, collateral managers of CBOs, pension funds and mutual funds. End-buyers are mainly commercial banks but also hedge funds and, to a lesser extent, non-financial companies.Participants suggest that the market has continued to grow and develop rapidly since the BBA survey. It is difficult to draw any firm conclusions yet about how it will work in a steady state. At present, however, the single name CDS market appears to be relatively

In spite of these potential advantages, the OCC data for US banks show that only the largest appear to use credit derivatives on any scale at present. In the data, it is impossible to separate the activities of commercial banks as intermediaries from their purchases of protection to hedge risk on their loanbooks. For example, the notional credit derivatives exposures of JP MorganChase, an important intermediary, comprised 64% (around US$227 billion) of the aggregate for all 400 US banks and trust companies at endMarch 2001. But outside JP MorganChase, Citibank and Bank of America, the notional exposures of the remaining 396 US banks that use derivatives was only US$18.4 billion. This suggests that regional US banks are making only modest use of credit derivatives, whether purchasing protection on their loanbooks or selling protection to diversify their credit portfolios. It may be that the European banks are more significant end-buyers of protection. For example, 29 of the 51 CLOs and 21 of the 32 synthetic CLOs rated by Moodys in 2000 involved European banking portfolios. The total value of risk transferred was US$48 billion, of which 90% was through credit default swaps. An important motivation for banks has been regulatory. The 8% Basel minimum regulatory capital requirement on corporate exposures is higher than the economic capital requirement on many investment grade exposures, giving banks an incentive to transfer the risk to entities not subject to the same regime. This may help to explain why most CLOs to date have referenced portfolios of loans to companies of relatively high credit quality. The proposals to reform the Basel Accord announced in January 2001 may have important consequences for the market (see Box 2). The intention is that, by aligning capital requirements more closely with economic risk, the proposals will reduce the purely regulatory motive for portfolio transactions so that transfers of high quality corporate loans might decrease. But, importantly, the Basel Committee on Banking Supervision decided that credit risk modelling has not progressed far enough to recognise default correlations in setting bank capital requirements. Banks may still therefore have an incentive to transfer the risk on portfolios to protection sellers able to adjust their capital requirements to reflect greater diversification. Non-financial companies Judging from the Banks regular contacts with UK companies and market intermediaries, corporate involvement in the credit derivatives market remains limited to a handful of large multinationals. Intermediaries do, however, see potential for a


systems and other ancillary services such as foreign exchange and derivatives. The use of credit derivatives is part of a wider trend among some of the largest banks to separate out these services so that they can be priced appropriately. Any credit risk is, in principle, valued according to its marginal contribution to the risk and return on the banks overall credit portfolio. If the credit risk does not fit with the portfolio, any additional cost of selling the debt or purchasing protection using credit derivatives must be recouped from the banks other business with the customer. Banks may also purchase credit derivatives, alongside purchases of loans and bonds in the secondary markets, to manage their portfolio actively. For example, they might sell protection where they can bear the risk at a lower cost than the market price because it diversifies their portfolio across industry sectors or regions in which they do not have many customers.


number of applications as the market matures. For example, companies could use CDS to buy protection against credit extended to customers or suppliers an example might be the extension of so-called vendor finance to telecom operators by telecom equipment manufacturers, where CDS might usefully be used to reduce the size and/or concentration of the resulting credit exposures. Insurance companies Insurance companies are net sellers of protection and their participation in the market seems to be increasing. An insurance company can sell protection both through investment in securities such as CDOs or credit-linked notes on the asset side of its balance sheet and, on the liabilities side of its balance sheet, by entering into single-name or portfolio default swaps, writing credit insurance or providing guarantees. The greater prominence of insurers is clearly an important explanation for the increasing volume of portfolio transactions. Many insurance companies have regulatory or legal restrictions on their ability to enter into derivatives contracts. But most life and general insurance companies can invest in credit-linked notes and CDOs alongside equities, bonds and other asset classes. EU insurance companies, in particular, are said to have been significant investors in CDO tranches in order to gain greater exposure to the US high yield market as part of the diversification of their portfolios since European Monetary Union. These are often structured as principal-protected notes in order to meet the requirements of some insurance regulators to treat them as bonds rather than equities for capital adequacy purposes. For example, contacts say that German insurance companies have been major investors in principal-protected equity and mezzanine tranches of CDOs. Some insurance companies are said to have begun by investing in senior tranches of CDOs and then added higher yielding mezzanine tranches as they became more familiar with the asset class.


Significant participation on the liabilities side of the balance sheet appears currently limited to a relatively small number of large, international property and casualty insurers and reinsurers, together with specialists such as monolines and Bermudan reinsurers. US insurance regulators agreed in 2000 to treat transactions using derivatives that replicate the cashflows on a security, such as a corporate bond, in the same way as the replicated asset. The agreement has been implemented in a number of states, including New York, where insurance companies have been allowed to hold up to 10% of their investments in replicated assets since January 2001. This may give US insurance companies greater scope to sell protection using credit derivatives. But some property and casualty and reinsurance companies clearly have entered the market on a relatively large scale since 1998/9. Their motivations are said to have included low premiums in their traditional property and casualty businesses, apparent opportunities because they are not subject to the same regulatory capital requirements as banks and the possibility that credit risk might further diversify portfolios. Portfolio default swaps and baskets are potentially attractive to these insurers because they are based on diversified portfolios and offer the potential for differing degrees of leverage depending on the tranche held. Some have gone beyond portfolio transactions and sought to put together a


Insurance companies also provide financial guarantees on the senior tranches of CDOs, a practice which is long established in the asset-backed and US municipal bond markets. Such credit wrappers are used to improve the rating of the tranche (credit enhancement) in order to meet the needs of investors. They typically provide an unconditional and irrevocable guarantee that principal and interest payments will be made on the original due dates. But they do not provide cover for accelerated payment following default. A few AAA-rated insurers, known as monolines because they specialise in credit insurance, dominate the market, although some of the major property and casualty insurers have also begun to offer such policies. Monolines are also said to be the largest sellers of protection on super-senior tranches of CLOs. Annual accounts suggest that they, in turn, reinsure around 15-25% of their exposures. Pension/investment funds and hedge funds Similarly to insurance companies, pension and investment funds are also important investors in CDO tranches and creditlinked notes. The nature of the fund tends to determine the seniority of the investment. For example, leveraged debt funds might buy higher-risk, mezzanine tranches whereas senior tranches might be sold to pension funds. A few hedge funds are also said to specialise in investing in the first loss and mezzanine tranches of CDOs. But hedge fund participation in credit markets appears to remain relatively small compared to, for example, equity markets. In particular, hedge funds are thought to be little involved in arbitraging CDS, loan and bond markets. Hedge funds are, however, active users of single-name CDS in order to hedge other trades. Probably the most significant example is convertible bond arbitrage, where hedge funds use CDS to hedge the credit risk on the issuer of the bond. Traders say that CDS premia can spike upwards if a company issues convertible bonds, as funds seek to buy protection. They can, it is suggested, be relatively insensitive to the cost of hedging the credit risk, as their goal is to isolate the embedded equity option. Over the past year, hedge funds have become large end-buyers of protection on some entities that have issued convertible bonds, typically lowerrated US companies. A particular category of investment fund manager is the collateral managers of CBO funds. Typically they invest in the first loss,


portfolio of single-name default swaps. A few are active traders and intermediaries. More typically, insurance companies are looking to put together a large and relatively static book of portfolio and perhaps single-name positions, using credit modelling and/or actuarial techniques to price the risk. Until recently, non-banks have found it difficult to put together such portfolios because they have been limited to acquiring (on the asset side of their balance sheets) bonds that companies decide to issue. Credit derivatives, in effect, reduce the transaction costs for non-banks of constructing a diversified credit book. Some large insurers appear to have focussed on super-senior or senior tranches, making use of their high credit ratings. Other companies, such as the Bermudan-based reinsurers, have reportedly been sellers of protection on mezzanine tranches of CDOs, baskets and on single names.


equity tranches of the CBOs that they manage. The track record of the collateral manager is said to be a key consideration in attracting protection sellers for the mezzanine and senior tranches. Intermediaries Most of the large global investment banks and securities houses have developed the capacity to buy and sell protection in the single name CDS market in order to provide liquidity to customers and trade for their own account. Many are bringing together their CDS and corporate bond trading desks with a view to encouraging traders to identify arbitrage opportunities between the two markets. This parallels moves to integrate, to a greater or lesser degree, government bond, swap and repo desks during the 1990s. Intermediaries also use CDSs to manage credit risk in their other activities. In particular, they buy protection against counterparty risk arising in other OTC derivative transactions, such as interest rate swaps (line buying). In this context, CDSs are now established as an alternative to collateralisation. For example, an intermediary may prefer to buy protection from a third party than request collateral from a counterparty if it is a valuable corporate customer. The first collateralised debt obligation with credit events linked to payments by counterparties on a portfolio of OTC transactions was issued at the end of 2000. One role of the intermediaries is to bridge the different needs of protection sellers and buyers. An example is the legal or regulatory restriction in a number of countries against insurance companies using derivatives (except to hedge insurance business), so that these insurers cannot sell protection directly using ISDA documentation. They can, however, sell insurance to other insurance companies against their credit exposures on nearly identical terms. Some intermediaries have therefore established captive insurance companies (known as transformers) in financial centres such as Bermuda that do allow insurers to enter into derivatives. The transformers typically sell protection to banks using CDS and simultaneously purchase back-to-back protection from insurers under insurance policies (Diagram 5).


Another, probably more significant, function of intermediaries is the bundling of single credits to create portfolios. As explained earlier, demand by insurance companies to sell protection on portfolios and investment funds to purchase CDOs and credit-linked notes has increased recently. It is apparently outstripping the supply from commercial banks looking to buy protection on their loanbooks. Intermediaries have responded by putting together synthetic CDOs and portfolio default swaps in which the sellers/investors specify the mix of credits that they want to hold. Moody's rated thirteen such synthetic transactions in 2000


Pricing, liquidity and relationship with other credit markets A single-name CDS is similar to an option exercisable if a credit event occurs. The pay-off is the notional value of the CDS less the market value of the reference entitys debt following the credit event. Although the inclusion of credit events other than default complicates pricing somewhat, the key variables are the expected probability that the reference entity will default over the life of the CDS, the expected recovery rate on the debt and the required return on any economic or regulatory capital held by the protection seller against the risk of unexpected losses on the transaction. In this sense, pricing single name CDS is little different to pricing loans or bonds. Most would be settled physically, so that the protection seller steps into the shoes of the protection buyer following a credit event. In principle, therefore, the premium on a CDS should be similar to the credit spread on the reference entitys debt trading at par or, more precisely, the spread over LIBOR if the fixed return on that debt is exchanged for a floating rate return in an asset swap. An important characteristic of the market is that counterparty exposures on outstanding CDSs could increase sharply if credit quality within the corporate sector were to deteriorate and large numbers of companies were to move close to default. The development of the CDS market is bringing closer together different credit markets that have previously been segmented. For example, contacts say that in 1998 loans to the Republic of Turkey were priced about 150 basis points above LIBOR, bonds were about 500 basis points over LIBOR, political insurance cost 300 basis points, and CDS were priced at 550 basis points. Prices on these different instruments are unlikely to converge completely. For example, loans may contain covenants and clausing that allow lenders to take pre-emptive action to protect their positions more easily than bondholders; or banks may under-price loans in order to develop a relationship with the borrower in pursuit of other ancillary business. Both factors may mean loans still trade at lower credit spreads than bonds. But CDS have the potential to encourage arbitrage and increase transparency for three reasons:
CDS offer a relatively pure exposure to credit risk, which, in

principle, makes them an attractive instrument to hedge credit risk embedded in other instruments; and may make their prices


but seventeen in Q1 2001 alone. Traders say that demand from banks and securities houses to sell protection in order to hedge portfolio default swaps was one explanation for the general downward trend in premia in the single name CDS market in Q1 2001. Intermediaries might still be left net short of credit risk ie protection bought exceeds protection sold. But it is possible that they will welcome this position as an offset to the inventory of corporate bonds that they typically carry from their primary and secondary market activities. It might also be a natural hedge to the pro-cyclicality of investment banking revenues for example, IPO and M&A activity tends to fall off during economic slowdowns when credit risk typically crystallises. A greater concern would be if an investment bank was unexpectedly net long of credit risk: for example, if it had constructed the hedges for a CDO before placing the transaction. Because of this balance of risks, portfolio transactions are typically only hedged after completion.


a benchmark against which those of other credit instruments can be compared.


Although the CDS market remains smaller than the bond and

Illiquidity in the term reverse repo (or stock borrowing) markets

The CDS market may also have greater liquidity for those looking to take a short position in a particular credit. In the bond market this means selling the bond short and borrowing it through reverse repo or stock borrowing. Especially in Europe, liquidity in the term stock borrowing (or repo) market for corporate bonds can be unpredictable, partly because not all holders are willing or able to lend securities. Taking a short position by buying protection using CDS can be more straightforward. Market participants say that the CDS market has had greater two-way liquidity than the bond market in some recent cases when a companys creditworthiness deteriorated sharply, such as Xerox and Pacific Gas and Electric. Certainly market participants have been sufficiently confident in market liquidity that they have used CDS to take views on changes in creditworthiness, expecting to be able to close out the position and realise any mark-to-market profit by entering into an opposite trade in the future. A typical trade might be to take a view on the shape of the term structure of credit spreads. For example, a speculator may believe that the forward credit spreads implied by current premia on term CDS are too high or low. Such trading increases market liquidity for those buying protection to hedge credit exposures or selling protection as part of an investment portfolio.


In practice, market prices for CDS can be lower than, close to or higher than credit spreads on corporate bonds (the so-called default-cash basis), both across different reference entities and for the same entity over time. Market participants say explanations for changes in this relationship include:



loan markets, it is more standardised. CDS trading is concentrated at certain maturities, principally five years, whereas bonds and loans have different maturities and coupons. This may make it easier for intermediaries to hedge CDS positions and encourage tighter bid-offer spreads, and so foster liquidity.
Liquidity in the CDS market is less constrained by whether the

for corporate bonds can mean CDS premia move higher relative to credit spreads on bonds if demand to buy protection increases. This reflects the cost of taking a short position in bonds in order to arbitrage the two markets. Box 3 shows that this seemed to happen in the telecom sector in the second half of 2000.
Some market participants (eg insurance companies or hedge

reference entity decides to issue debt or whether existing debt holders are prepared to sell or lend securities, although these are needed for physical settlement following a credit event.
Market structure and liquidity
A number of large intermediaries publish indicative two-way CDS prices for the most-traded companies and sovereigns on their websites and on electronic data vendor screens. Trading in the inter-dealer market occurs through voice and internet-based brokers. Services exist to provide reference prices for marking-to-market existing transactions, based on averages of prices supplied by dealers and/or on trade prices in the inter-dealer market. Traders say that liquidity in the single-name CDS market varies, with different entities and sectors having more activity at different times. In general, activity is said to increase when assessments of creditworthiness are changing, as banks look to hedge their risks and traders take positions. For example, telecoms reportedly became more liquid during 2000 H2. The corporate bond market is typically more liquid if a borrower has large, recent bond issues but CDS may be more liquid if the company is an infrequent issuer and/or long-term investors hold most of its debt.

CDS may expose protection sellers to a little more risk than

bondholders if they believe there is value in the option for the protection buyer to deliver various obligations of the reference entity following a restructuring. They may therefore require CDS premia to be a little higher.

Compared to bondholders, protection sellers under CDS may

require a premium because they have no contractual rights, such as covenants or information requirements, vis-à-vis the reference entity allowing them to monitor its creditworthiness or influence its decision-making. Protection sellers under CDS may be subject to different marginal tax rates than bondholders.
Compared with bondholders, participants in the CDS market

may require different liquidity premia against the cost of trading out of positions.


funds) may not always have ready access to financing and prefer to take credit risk though an unfunded CDS than by purchasing a bond. Financing a bond position exposes the investor to some liquidity risk if its source of funding becomes more expensive or dries up. Demand to sell protection by such investors may reduce CDS premia relative to credit spreads on bonds.


LESSON 25: INTEGRATING MARKET AND CREDIT RISK MANAGEMENT


The Integration of Market and Credit Risk Measurement
The measurement and aggregation of risk continue to play an increasingly important role in the overall risk management process. Whether it be from senior management, the board, or regulators, information on the risk that firms are taking through their trading activity is in demand. As the measurement of market risk using Value-at-Risk (VaR) has become a standard tool, stakeholders have begun to look towards conquering the measurement of the next step along the risk continuum in trading activities, credit risk. The pricing and measurement of this risk component have taken on increased importance with the rapid spread of credit derivatives. As recent market events have shown, however, there is an important interplay between market and credit risk. The ability to measure market and credit risk for trading activities together in an integrated model allows for a more complete picture of the underlying risk.

Market Risk Measurement through VaR
VaR measures the loss on a portfolio, over a given holding period, that will not be exceeded with a specified level of confidence. There are a number of methodologies available for the measurement of VaR. In all cases, however, a distribution of portfolio values at the end of the holding period must be estimated. In some cases this distribution is assumed to be normal, which may allow analytical solutions to be developed. The distribution may also be estimated using historical returns. Finally, a Monte Carlo simulation can be used to create a distribution based on assumed stochastic processes for the underlying variables. In each method, based on the specified confidence level, a particular point on the distribution is taken as the VaR measure. For the measurement of market risk, the choice of methodology often depends on the characteristics of the underlying portfolio as well as other factors, for example the degree of leptokurtosis in the underlying asset return distribution, the availability of historical data, or the desire to specify a more complicated stochastic process for the underlying assets. In our framework, we will focus on the use of Monte Carlo simulation. While this is the most computationally intensive of the available methodologies, it allows the most flexibility for the specification of an integrated market and credit model.


Credit Risk Measurement using the Value Process
As new models for the measurement of credit risk are developed, they generally fall into two categories. The first category includes models that specify an underlying process for the default event: firms are assumed to move randomly from one credit rating to another with specified probabilities, and one of the potential states that a firm can transition to is default. The second category of models requires the specification of a stochastic process for firm value. In these models, default occurs when the value of the firm reaches an exogenously specified barrier. In both approaches, when the firm reaches default the credit exposure is driven by the recovery rate. In this framework, we will be working with the second of these methodologies, the firm value model. While there are advantages and disadvantages to each method, the firm value model allows for the development of an integrated model that is linked not only through correlation but also through the impact of common stochastic variables, as will be seen.

The Integrated Market and Credit Risk Framework
The integration of market and credit risk measurement requires the ability to interweave the concept of VaR with a credit risk model. Overall, our goal is the same as that for a VaR measure: estimate the distribution of portfolio values at the end of a holding period. In the integrated framework, however, the distribution is driven by both market risk and credit risk. Using the firm value model, the credit quality of a particular firm is measured by the value of the firm relative to the bankruptcy barrier. In this integrated framework, we separate the simulation of the underlying stochastic processes over the holding period from the valuation of the portfolio at the end of the holding period. This bifurcation minimizes the computational burden of the methodology while maintaining the flexibility of valuation across a wide range of products. The underlying credit valuation model follows that of Kim, Ramaswamy and Sundaresan (1993). For the sake of brevity, we specify only the stochastic processes; for a complete statement of the underlying assumptions, the reader is referred to Kim, Ramaswamy and Sundaresan (1993). The following processes are assumed:
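The process equations themselves do not survive in this text. Based on the variable definitions given below, a plausible reconstruction, consistent with a lognormal firm-value process whose drift contains the Vasicek short rate, is:

$$\frac{dV_i}{V_i} = (r - b_i)\,dt + \sigma_i\,dZ_i, \qquad dr = k(q - r)\,dt + \sigma_r\,dZ_r, \qquad dZ_i\,dZ_r = \rho\,dt$$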


where V_i is the value of firm i, r is the short rate, σ_i is the volatility of the value process for firm i, σ_r is the volatility of the short rate, k is the speed of mean reversion, q is the long-term mean rate, and b_i is the payout rate of the firm. Z_i and Z_r are standard Brownian motions with instantaneous correlation ρ.

Note that, for simplicity, a Vasicek process has been used; the framework can be specified with any model for the short rate. This allows the use of a model that can be calibrated to the term structure of interest rates, such as a Hull-White model. Also note that the stochastic short rate r appears in the drift of the value process, which allows for an interplay of market and credit risk beyond pure correlation effects.


All that is left is to use the stochastic processes to generate a Monte Carlo simulation over the holding period, and then to solve the PDE at the end of the holding period to provide a valuation that includes the credit risk of the contingent claim.

An Example
The model was implemented in MATLAB. The PDE was solved numerically using an Alternating Direction Implicit (ADI) finite difference scheme; an ADI scheme was chosen because it maintains tridiagonal matrices, which allow efficient computation using MATLAB's sparse matrix functionality and exploit the inherently matrix-oriented nature of the problem. The Monte Carlo simulation was also implemented in MATLAB, based on the stochastic processes specified above. Before calculating the distribution for the risk measure, it helps to investigate the valuation component to gain insight into which factors will impact the integrated market and credit risk measure. As an example, a 3-year bond with a 5% coupon was chosen. Later, when the simulation is run, the holding period will be set to one year, leaving a bond with two years to maturity at that time. The initial short rate is set to 5%, the mean-reversion speed to 0.1, the long-term mean rate to 6%, and the initial firm value to 200. The interest rate volatility is set at 2.5% and the firm's volatility at 15%.
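A minimal Python sketch of the simulation leg of this set-up (the original implementation was in MATLAB). The parameter values follow the example above, while the payout rate, the correlation between the two drivers, the time grid and all names are our own illustrative assumptions:

import numpy as np

# Illustrative parameters taken from the example in the text (names are ours)
V0, r0 = 200.0, 0.05              # initial firm value and short rate
sigma_V, sigma_r = 0.15, 0.025    # firm-value and short-rate volatilities
k, q = 0.1, 0.06                  # mean-reversion speed and long-term mean rate
b = 0.0                           # net payout rate (assumed for illustration)
rho = 0.0                         # firm-value / short-rate correlation (assumed)
T, steps, n_paths = 1.0, 252, 10_000   # one-year holding period
dt = T / steps

rng = np.random.default_rng(0)
V = np.full(n_paths, V0)
r = np.full(n_paths, r0)
for _ in range(steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    # firm value: lognormal step with drift (r - b), a simple discretisation
    V *= np.exp((r - b - 0.5 * sigma_V**2) * dt + sigma_V * np.sqrt(dt) * z1)
    # short rate: Euler step of the Vasicek mean-reverting process
    r += k * (q - r) * dt + sigma_r * np.sqrt(dt) * z2

# The simulated firm value and short rate at the horizon would then feed the
# PDE-based revaluation of the (now two-year) bond on each path; here we only
# summarise the simulated states.
print("mean firm value:", V.mean(), " mean short rate:", r.mean())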

With these processes, the following PDE can be written for the valuation of a contingent claim, P.
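The PDE itself is not reproduced in the text. Under the two processes above, a plausible form for the value P(V, r, t) of a claim paying a continuous coupon c (our reconstruction, with σ_V denoting the issuer's firm-value volatility σ_i) is:

$$\frac{\partial P}{\partial t} + \tfrac{1}{2}\sigma_V^{2}V^{2}\frac{\partial^{2} P}{\partial V^{2}} + \rho\,\sigma_V\sigma_r V\,\frac{\partial^{2} P}{\partial V\,\partial r} + \tfrac{1}{2}\sigma_r^{2}\frac{\partial^{2} P}{\partial r^{2}} + (r - b)V\frac{\partial P}{\partial V} + k(q - r)\frac{\partial P}{\partial r} - rP + c = 0$$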

For the bankruptcy barrier, V*, the following boundary condition is applied when V = V*:
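The boundary condition is also missing from the text. Consistent with the recovery variables defined just below, a plausible form is that on hitting the barrier the claim is worth a recovery fraction of an otherwise comparable default-free bond:

$$P(V^{*}, r, t) = \delta(t)\,B(r, t; c)$$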

where δ(t) is the recovery rate, c is the coupon and B(r, t; c) is the value of a comparable Treasury bond. Additional boundary conditions are applied depending on the contingent claim being valued; this highlights another advantage of the framework, namely the ability to incorporate a wide range of products simply by changing the boundary conditions.

The estimation of V* is a critical issue. In this framework we continue to follow Kim, Ramaswamy and Sundaresan (1993), who assume that default occurs when the firm is unable to make its coupon payments. Assuming that the firm has only a single issue of debt, V* is then specified as follows:
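The expression for V* does not survive either. Under the assumption just stated, default occurs when the cash thrown off by the firm, bV, can no longer cover the coupon, which would give (our reconstruction):

$$V^{*} = \frac{c}{b}$$

where c is the coupon and b the payout rate; this is consistent with the later observation that V* changes as the payout rate changes.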

Figure 1 - Bond Value Sensitivity to Issuer Credit Quality and Interest Rates

As a first analysis, the valuation model was run across a range of short rates and V*. The results are shown in figure 1. The graph shows the sensitivity of the bond value to the credit quality of the issuer: the bond value falls quickly towards zero as V* increases, indicating a higher likelihood of default. On the interest rate side, the bond shows the expected sensitivity to interest rates. Interestingly, the bond value seems to be more sensitive to interest rates when the credit quality is lowest. This confirms the intuition that firms with low credit quality should carry more market risk than firms with high credit quality; firms with higher credit quality are those which we expect to have only a base level of interest rate exposure.

Figure 2 - Bond Value Sensitivity to Firm Value Volatility and Net Cash Payout Rate

As a second level of analysis, the bond valuation was performed for a range of firm net cash payout rates and firm value volatilities. Noting that V* changes as b, the payout rate, changes, we find the expected results. Kim, Ramaswamy and Sundaresan (1993) note that firms with higher net cash flows are more likely to meet their coupon obligations and should, therefore, have lower credit yield spreads, as is highlighted in this valuation analysis. As shown in figure 2, when the payout rate is low, the bond valuation is highly sensitive to the firm value volatility. However, as the payout rate increases, and the firm is more likely to make its coupon payments, the volatility of the firm value process has less of an effect on the bond valuation. Overall, this analysis confirms our intuition with regard to low-grade bonds and their sensitivity to underlying firm and market factors.

Finally, we run the simulation to obtain the integrated market and credit risk distributions, shown below. As can be seen, the lower credit-quality bond has an integrated distribution very different from the stand-alone market risk distribution of the same bond. On the other hand, the high-grade bond distribution highlights the marginal effect of the credit risk component. Results such as these highlight the value of an integrated market and credit risk measure: although the credit quality of the bond has a clear impact on the risk measure, it affects the overall risk of higher-grade bonds by a significantly smaller amount.

Figure 3: Market and Credit Risk Distributions

The framework developed here can be extended to multiple products across multiple issuers and counterparties. In addition, the ADI scheme can be extended to three dimensions to allow the inclusion of products where two different underlying firm value processes affect the contingent claim valuation, as in the case of a credit derivative. The methodology provides a flexible way to develop a risk measure that integrates market and credit risk.

Notes:


LESSON 26: OPERATIONAL RISK

Introduction

Nature & Scope of Operational Risk
The need to manage and mitigate Credit Risk and Market Risk has long been recognised by Banks and Financial Institutions. In fact, efficient management of financial risk can give a Bank/FI a decisive competitive edge over rivals. With debacles like the collapse of Barings Bank, the LTCM crisis, the P&G-Bankers Trust litigation, etc., the finance industry became painfully aware of the significant threat posed by operational risk. 9/11 and its aftermath have only served to highlight the importance of operational risk management.

The New Capital Accord (Basel II), published by the Basel Committee on Banking Supervision at the Bank for International Settlements (BIS), defines Operational Risk as: "The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events." This definition includes legal risk, but excludes strategic and reputational risk. By its very definition, then, Operational Risk extends to all activities, business functions and organizational units of a firm. Any risk management activity comprises four steps, viz.:
1. Risk Identification
2. Risk Measurement
3. Risk Control
4. Integrated Risk-Return Management

Unlike market and credit risk, the measurement and management of operational risk is not a theoretically straightforward exercise. The very disparate and wide scope of Operational Risk poses significant conceptual issues in all the steps of the risk management process. These are:
1. Definition: At the most basic level, differences still exist on the definition of Operational Risk. Definitions have ranged from extremely narrow ones focussing only on transaction processing risks, to sweepingly broad definitions that include all risks faced by a Bank/FI other than Credit and Market Risk. Both these extremes are clearly of little use. The BIS definition attempts to evolve a broad consensus.
2. Isolation: Operational Risk, Market Risk and Credit Risk are not mutually exclusive. Often, one risk event has repercussions on all three risk domains. However, quantification requires isolating the risk effects of each event. While convenient thumb rules exist for demarcating credit and market risk, operational risk poses specific problems.
3. Risk Factors: Unlike Credit and Market risk, the risk factors that contribute to Operational Risk are often subject to internal control by a bank. Furthermore, the causal linkage between a risk factor and the associated loss severity and likelihood is often difficult to establish. For instance, the linkage between market yields and the value of a bond portfolio is direct and measurable. However, Operational Risk factors, such as a people risk event caused by a rogue trader, cannot be traced to a directly quantifiable exposure and/or probability of loss.
4. Quantification: There are two schools of thought concerning quantification of Operational Risk. One believes that Operational Risk is an inherently subjective domain and that quantification would rely too much on human judgement to be reliable. The other school believes that quantitative measurement presents our only hope of managing operational risk and hence advocates the building up of loss event databases to provide the requisite statistical foundations for quantification. However, the fact that Operational Risk events do not represent any notional amounts, contract values or payoffs makes quantification of exposure particularly tricky.
5. Modelling: Market and Credit Risk rest on solid academic foundations. Modern Portfolio Theory and the Efficient Markets Hypothesis provide the conceptual framework within which stochastic calculus and analytical methods provide elegant mathematical models for credit and market risks. These models can work on several decades of historical data for credit risk, and near-continuous pricing data on exchange-traded instruments for market risk, to generate meaningful results. Operational Risk, on the other hand, lacks such academic underpinnings as yet. The quantification issue mentioned above has a bearing on the nature of operational risk modelling. While statistical models like Operational VaR, loss distributions, etc., can be used, validating and fine-tuning the assumptions becomes difficult owing to the lack of data. Given these issues, a purely quantitative approach to Operational Risk may be inadequate. Industry consensus thus seems to be evolving towards using both qualitative and quantitative means. Combining the two into an integrated Operational Risk view presents further modelling challenges.
6. Controlling: Risks are typically controlled by reducing exposures or hedging positions. Operational Risks cannot be controlled by these means. Some Operational Risks require process modifications that would minimize or eliminate a particular kind of loss event. Some can be insured against. Some have to be lived with. Determining which of these remedies is suitable in a given circumstance may seem straightforward, but quantifying the effectiveness of these mitigation strategies poses further challenges.
7. Capital Allocation: The single largest innovation in the New Capital Accord is that it requires Banks and FIs to allocate capital against the risk of Operational Losses.
8. Risk Aggregation: In the case of market and credit risk, aggregation, while practically complex, is conceptually clear: aggregated Credit Risk is essentially the risk to a Credit Portfolio. The absence of an Operational Portfolio relates to the absence of an explicitly quantified Operational Exposure. However,


there is general agreement that, intuitively, operational loss events are correlated.

Classification of Operational Risks
To facilitate assessment of Operational Risks, they are often categorized into broad risk classes. While several such categorizations have been proposed, the classification suggested by Hoffman is given below.
1. People: events such as employee fraud, human error, etc.
2. Relationship: risks arising out of client interfaces, viz. product liability, pricing negotiation, contract default, etc.
3. Technology & Process: process failures or transaction errors, e.g. downtime of a trading system or errors in settlement processing.
4. Physical Asset: physical loss or damage owing to fire, theft, etc.
5. Other External: risks arising out of the activities of, or failures at, external agencies, e.g. failure of a bank payment gateway or human error at a counterparty.
Separating risks into classes facilitates loss event identification and is a very important tool for the qualitative assessment techniques discussed below. In addition, it is also useful to think of Operational Risk events in terms of their locale in an organisation, giving rise to the following process dimensions for Operational Risk.
1. Origination: risks of failure at transaction origination.
2. Execution & performance: risks of failure while executing orders, processing transactions, or fulfilling contractual obligations.
3. Managing business lines: risks arising from organisational management activities at the business unit level, e.g. technology implementation or work-force management risks.
4. Corporate level: enterprise-level risks, such as damage due to natural causes or terrorism, or losses owing to improper security and access controls in the organisation.

Basel II Requirement for Operational Risk
The requirement for banks to allocate regulatory capital for Operational Risk is a significant innovation in the New Capital Accord, or Basel II. The accord outlines three methods for calculating operational risk capital charges, in a continuum of increasing sophistication and risk sensitivity, namely:
Basic Indicator Approach: banks using the Basic Indicator Approach must hold capital for operational risk equal to a fixed percentage of average annual gross income over the previous three years.
The Standardised Approach: gross income is measured for each business line rather than for the institution as a whole, and a separate supervisory factor is applied to each line.
Advanced Measurement Approaches (AMA): under the AMA, the regulatory capital requirement equals the risk measure generated by the bank's internal operational risk measurement system, using the quantitative and qualitative criteria specified in the Capital Accord. Use of the AMA is subject to supervisory approval. Banks adopting the AMA will be required to calculate their capital requirement using the AMA as well as under the existing Accord for a year prior to implementation of the New Accord at year-end 2006.
The accord encourages banks to move along the spectrum of available approaches as they develop more sophisticated operational risk measurement systems and practices.
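As a simple illustration of the Basic Indicator Approach described above, the capital charge is a fixed fraction of average annual gross income; Basel II calibrates that fraction (alpha) at 15%. The expression below ignores the accord's treatment of years with negative gross income:

$$K_{\mathrm{BIA}} \;=\; \alpha \times \frac{1}{3}\sum_{i=1}^{3} GI_{i}, \qquad \alpha = 15\%$$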

Qualitative & Quantitative Approaches
The operational risk associated with any event has two components: loss severity and loss probability. Loss itself consists of expected and unexpected components, and the unexpected component may be severe or catastrophic. Usually, expected losses are adjusted for in pricing or in reserve allocation; unexpected losses require capital allocation. Given that operational risk events are most often subject to internal control, any Operational Risk system that passively measures Operational Risk would clearly be inadequate. Once risk factors are identified as likely causes of Operational Risk losses, mitigating steps need to be initiated. While quantification indicates risk magnitude and capital charges, it may not by itself suggest mitigating steps. This makes it advisable for banks to combine qualitative and quantitative approaches to Operational Risk. The broad steps involved are:
i. determine the types of operational losses that could occur;
ii. identify the causal risk factors;
iii. estimate the size and likelihood of losses; and
iv. mitigate the associated risks.
Approaches for measuring Operational Risk can be seen along two dimensions.
1. Qualitative versus Quantitative approaches: whether the approach relies on quantitative methods like Bayesian networks or actuarial analysis, or on qualitative techniques like audits and self-assessments.
2. Bottom-up versus Top-down analysis: whether firm-wide risk is aggregated up from unit-level risks, or unit-level risks are drilled down to from firm-wide risks.
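To make the split just described concrete (the notation is ours, not the text's): if EL denotes the mean of the annual operational loss distribution and OpVaR_q its q-quantile, capital is typically held against the unexpected part,

$$UL_{q} \;=\; \mathrm{OpVaR}_{q} - EL .$$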


Qualitative Approaches
Qualitative approaches involve audits, self-assessments and expert or collective judgement.

Critical Self-Assessment (CSA)
This is one of the common qualitative bottom-up approaches, in which line managers are asked to critically analyse their business processes, given specific scenarios, to identify potential risks and gaps in their risk management processes. Tools like questionnaires, checklists and workshops are used to help the managers analyse the risk profile of their business units. The key idea behind this method is that business managers are in the best position to identify and manage the Operational Risks pertaining to their business units.

Risk Audit
Employing the services of external (or internal) auditors to review the business processes of a business unit is another approach. This process not only helps identify risks but also helps put in place the oversight organisation for Operational Risk.

Key Risk Indicators (KRI)
The KRI approach tries to blend the qualitative and quantitative aspects of Operational Risk management. The focus in identifying KRIs is on predictive rather than causal factors. Factors that have predictive value and that can be measured easily, with minimum time lag, can serve as risk indicators. Some risk indicators inherently carry risk-related information, for instance transaction volumes, trade size or portfolio size; others are indirect indicators, for instance IT budgets, transaction lifecycle durations or appraisal completion rates. Key indicators are identified from several potential factors and are tracked over time. The predictive capability of the indicators is tested through regression analysis on historical loss data and indicator measurements. Based on such analysis, the set of indicators being tracked may be modified suitably. Over time, as the model is refined, the set of indicators can provide early warning signals for operational losses.

Quantitative Approaches

Loss Scenario Modelling
This approach attempts to translate a qualitative operational risk assessment into a risk quantification mechanism. It uses methods similar to the Critical Self-Assessment approach, in that business managers analyse business processes to identify potential risks arising out of various loss scenarios. Quantification is achieved by assessing the loss frequency (the number of losses expected over a fixed period) and the loss severity, which are combined to arrive at a loss distribution. Since this approach requires the active involvement of business managers, it leads to proactive risk identification. However, the quantification arrived at is highly subjective and the correlations between the loss scenarios are difficult to gauge, posing problems for risk aggregation.

Loss Distribution / Actuarial Approaches
This quantification approach, widely used by the insurance industry, uses loss distributions derived from historical loss data. Loss distributions are derived by combining loss frequency distributions, which describe the frequency of losses over a given time interval, with loss severity distributions, which describe the severity of the loss. The loss distribution can then be used to quantify Expected Operational Loss, and a suitable quantile (95% or 99% confidence interval) can be used to arrive at an operational VaR. Techniques like Monte Carlo simulation can be used to fill gaps in the loss distribution arising from lack of data, and developments in extreme value theory allow more accurate estimation of the tail of the distribution. The operational VaR helps quantify the unexpected loss for which capital can be allocated.
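A minimal Python sketch of this loss-distribution approach; the Poisson frequency, Weibull severity and all parameter values are purely illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters for one operational risk class (illustrative only)
lam = 25.0                     # expected number of loss events per year (Poisson)
shape, scale = 0.8, 40_000.0   # Weibull shape and scale for loss severity

n_years = 50_000
annual_loss = np.empty(n_years)
for i in range(n_years):
    n_events = rng.poisson(lam)                       # simulate event count
    severities = scale * rng.weibull(shape, size=n_events)  # simulate severities
    annual_loss[i] = severities.sum()                 # total loss for the year

expected_loss = annual_loss.mean()
op_var_99 = np.quantile(annual_loss, 0.99)
print(f"Expected annual loss: {expected_loss:,.0f}")
print(f"99% operational VaR:  {op_var_99:,.0f}")
print(f"Unexpected loss (capital proxy): {op_var_99 - expected_loss:,.0f}")

In practice the frequency and severity parameters would be fitted to internal (and, where necessary, external) loss data rather than assumed.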
Causal / Factor-based Approaches
The Bayesian Network is one of the commonest causal approaches used to quantify Operational Risk. In this approach, process workflows are mapped to a probability tree in which each node represents a loss event or indicator and has an associated probability. Once all the initial nodes of the probability tree are assigned probabilities, the probabilities of all subsequent nodes can be calculated and the Bayesian Network is complete. Monte Carlo simulations can be run on the network, starting with the initial variables, to arrive at the required loss distributions. This analysis allows highly granular risk events and indicators that are specific to a business process to be used.

Modelling Operational Risks
The case for quantification
A number of approaches have been developed for modelling operational risk. Niall O'Brien, Barry Smith and Morton Allen outline the options and discuss how they relate to risk management objectives such as performance measurement and capital allocation.
The time to quantify has arrived. Following the publication of the Basle Committee's report on operational risk in September 1998 and last month's consultative paper on a new capital adequacy framework, the industry is on notice that it has to step up its thinking about how to measure, and not simply manage, operational risk. The benefits of measuring and streamlining the flow of capital, people and information into and out of the enterprise were realised long ago outside the financial sector, as evidenced by management programmes such as Total Quality Management, Six Sigma and Shareholder Value Added introduced at companies like US conglomerate General Electric and communications firm Motorola. With the search for value by customers and shareholders, deregulation and global competition transforming the financial services industry, there should be no need to wait for the Basle Committee to claim its operational risk claw-back before acting. Of course, quantifying operational risk is a challenge if it means supporting enterprise-wide performance measurement and capital allocation. But this is not the only possible objective. Simpler models, delivering relative or subjective measures (risk indicators, ratings, or impact measures), are widely used already. These are intended to improve the quality of workflow; reduce losses caused by process failure; change risk culture; and provide early warning of deterioration in systems or management.
Most of the major publicised losses at financial institutions of the past few years were due wholly or partly to operational risk, for example the losses sustained last year by investors in hedge fund Long-Term Capital Management, and by Sumitomo, Daiwa, NatWest and Barings. In such cases it seems far-fetched to suppose that quantification and risk capital attribution alone can help. For such low-frequency, high-impact events as rogue-trader syndrome, internal data will probably never be statistically valid. The current tendency among modellers is to look outside the enterprise and fit external data from comparable organisations. By this means, one could attempt to set aside enough risk capital to ride out the rogue trader event, if it were to occur. But then the business line's returns and competitiveness in the market are tied to those of the organisations used for benchmarking, leaving no incentive for investment in the kind of management controls that might have prevented or tempered the event in the first place. The lesson is that statistical models based on external data must be leavened with some form of internal marking-to-operations.

Operational risk can be divided into operational leverage risk (also known as business or strategic risk) and operational failure risk. Operational leverage risk is the risk that the organisation's operations will not generate the expected returns as a result of external factors, such as changes in the tax regime, in the political, regulatory or legal environment, or in the nature or behaviour of the competition. Modelling this kind of risk is best carried out using scenario analysis. Operational failure risk is the risk that losses will be sustained, or earnings foregone, as a result of failures in processes, information systems or people. In contrast to leverage risk, the risk factors in failure risk are primarily internal.

Modelling losses
In the sections that follow, we show how to combine distributional assumptions for event frequency and severity to derive loss estimates, using the familiar example of transaction processing errors. Although it would be possible to model total transaction handling losses as a single distribution, it is preferable to combine separate distributions for the mishandling event process and the severity. This has a number of advantages, including better drill-down into the causes and effects of losses, and the improved ability to set trigger thresholds for implementing dynamic control processes as part of the workflow and to see the effects of those controls.


The event process for transaction handling errors is best approximated as a Poisson process, in which the number of error events per unit of time is distributed as a Poisson variable (although, in theory, the exponential distribution could also be used to model the distribution of the time between errors). In general, Mondays and Fridays have a higher proportion of mishandled transactions than other days (see figure 1). The number of transaction mishandling events on different days of the week therefore follows different Poisson processes with their respective parameters: for example, on Mondays the number of mishandled transactions is distributed as Poisson(λ_Mon), on Tuesdays as Poisson(λ_Tue), and so on.


Having fitted the daily mishandling event data to a distribution, it is possible, using maximum likelihood analysis, to derive a consistent set of critical event count thresholds for each day of the week. Based on the same confidence intervals applied to the daily distributions, these can provide dynamic triggering of rules, such as alarms or manual drill-down. For modelling the continuous variable describing the severity of transaction mishandling events (for example, penalty payments in the case of settlement failure), the usual choice is the Weibull distribution (see figure 2).
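A small Python sketch of that thresholding step; the counts, the choice of a Poisson fit and the 99.9% confidence level are invented for illustration:

import numpy as np
from scipy.stats import poisson

# Hypothetical daily counts of mishandled transactions for one weekday (e.g. Mondays)
monday_counts = np.array([7, 9, 12, 8, 10, 11, 9, 13, 8, 10])

# The maximum-likelihood estimate of a Poisson rate is simply the sample mean
lam_hat = monday_counts.mean()

# Critical event-count threshold at the chosen confidence level (here 99.9%)
threshold = poisson.ppf(0.999, lam_hat)
print(f"Estimated rate: {lam_hat:.2f}, 99.9% alarm threshold: {threshold:.0f} events")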

The shape of the Weibull distribution is governed by its parameters α and β, and its probability density function, defined for 0 < x with 0 < α and 0 < β, is given by:
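The density itself is not reproduced in the text; in the usual shape-scale parameterisation (α the shape, β the scale, one plausible reading of the parameters above) it is:

$$f(x) \;=\; \frac{\alpha}{\beta}\left(\frac{x}{\beta}\right)^{\alpha-1}\exp\!\left[-\left(\frac{x}{\beta}\right)^{\alpha}\right], \qquad x > 0 .$$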

In order to model the total losses, a mixture of the Poisson and Weibull distributions must be formulated. The mixture distribution is formulated as a compound Poisson process. The total loss due to mishandled transactions, or severity amount, S(t), for some time interval (0, t), forms a compound Poisson process if:
1. the frequency of mishandled transaction events forms a Poisson process;
2. the individual loss amounts are independent and identically distributed; and


3. the individual loss amounts are independent of the number of events N(t).
If the mishandled transaction events occur in accordance with a Poisson process with rate λ, and the moment generating function (MGF) of the individual loss amounts (random variable X) is M_X(u), then the MGF of S(t), the mixture distribution, is M_{S(t)}(u) = exp{λt [M_X(u) − 1]}. It can be shown that E[S(t)] = λt·m1 and Var[S(t)] = λt·m2, where m1 is the mean of the Weibull severity distribution and m2 its second moment. Thus, we can calculate the mean and variance of the total losses due to mishandling. The final step is to take the overall loss distribution and use it to attribute risk capital to the overall transaction workflow. The distribution must be scaled from the daily total severity distribution to the appropriate horizon and confidence interval dictated by the firm's capital allocation policy. Simulation can be used to aggregate loss distributions across multiple operational risk categories. A similar methodology can be applied to address the important area of model risk. Model risk is a function of input data quality and of the model's inherent applicability and accuracy. The valuation model risk resulting from data quality problems should be considered. The lifecycle of a transaction may be characterised in terms of canonical events and the types of data quality problems typically associated with those events, including errors in market data; failure to capture all relevant trade attributes initially; and failure to capture lifecycle events such as resets, changes in collateral values, dividends or corporate actions correctly.

For these purposes, the portfolio can be sampled periodically. The frequency of errors should be assessed, as well as the severity, measured in dollar terms. To gather data for the model, a favoured technique is dollar-unit sampling, which gives more weight to the big transactions without overlooking the small transactions. Using this method, sampled transactions are checked and corrected where indicated, then revalued. Comparison of this result against the original valuation provides an estimate of the severity (the model error) for this transaction. Extrapolation of these results from the sample to the overall population then provides an estimate of the frequency, severity and total model error in the portfolio. As before, it is possible to fit the frequency and severity distributions, and define critical values using confidence intervals, which in turn can be used to control the sampling process. Depending on the point in the lifecycle at which the error occurs (for example, deal pricing), a causal model must then be used to translate this error into a true loss estimate. Modelling operational risk in financial institutions is still in its infancy. But the process is sure to develop, and the trend is likely to be towards bottom-up or hybrid models that, wherever possible, model the real workflows, despite the fact that it is harder to arrive at a full and consistent operational value-at-risk and capital allocation methodology by this means. Quantification is not an end in itself, but a step towards better management; and for some time to come, the greatest gains will be found in instrumenting and improving the organisation's core workflows.

The ups and downs of operational risk models
Fully-fledged models of operational failure risk fall into two broad categories: top-down and bottom-up. Top-down models integrate loss or earnings volatility data at the business unit or enterprise-wide level, independently of the actual workflow, to arrive at an implied estimate of the risk in the business unit as a whole. These models are easier to implement than bottom-up models, but are not inherently sensitive to the actual business process implementations. One top-down approach is to use the Capital Asset Pricing Model (CAPM) to benchmark against comparable institutions. Model inputs include equity prices, betas, debt leverage and benchmark equity price movements due to major operational failures. CAPM-based models attempt to strip away the components of the firm's specific risk that are due to balance sheet leverage and portfolio risk, and provide an overview of the firm's operational risk capital. The CAPM methodology is easy to implement, but inadequate by itself because it is tied to an enterprise-wide view. It cannot help with capital allocation or improvement in business processes. Focusing directly on operational loss data at business unit level leads to a simple top-down model that addresses this issue. Risk is expressed as the absolute value of the observed volatility of current budgeted (expected) expenses due to operational failures. This simple approach includes costs such as penalties for settlement failure, but ignores indirect effects on revenue such as foregone income. A more sophisticated model takes overall earnings volatility and attempts to strip away the components due to market and credit risk. Bottom-up models involve mapping the workflows in which failure may occur. In estimating risk, they make use of actual causal relationships between failures and resulting losses. They are sensitive to process improvement, but hard to implement and may never get off the bottom to support consistent enterprise-wide capital allocation. Bottom-up risk profiling starts from a mapping of the workflow in each business unit. At every point where operational failures can occur, profiling attempts to estimate the frequency of loss events, taking into account the controls that are in place. It then estimates the severity of the potential losses, accounting for any risk transfers, such as insurance. A simplistic implementation would attribute risk capital based on the product of the event probabilities and severities over the chosen horizon, summing the resulting operational value-at-risk for each different type of loss event. Despite its simplicity, such a method would still have the advantage of introducing model sensitivity to the workflow itself and to the quality of controls that are in place. A more sophisticated implementation builds on this approach by introducing statistical/actuarial methods, noting that both the event frequency and the severity should be modelled as probability distributions. The granularity of the unit of workflow used for analysis can be chosen pragmatically. For most types of failure event, it will be necessary to use severity data gathered over time, and it may, therefore, be necessary to include a model of inflation in order to scale losses into a time-independent unit.
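To make the "simplistic implementation" of bottom-up profiling described above explicit (the notation is ours): with p_j the probability of loss-event type j over the chosen horizon and s_j its estimated severity, capital would be attributed as

$$\text{Capital} \;\approx\; \sum_{j} p_{j}\, s_{j},$$

which the more sophisticated treatment replaces by full frequency and severity distributions.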


Because of the complementary strengths and weaknesses of top-down and bottom-up profiling, some institutions have attempted to create hybrid models. New models appearing on the market, such as PricewaterhouseCoopers' OpVaR or NetRisk's RiskOps, tend to adopt either the bottom-up approach or a hybrid approach integrating external loss event datasets.

Notes:


LESSON 27: CASE STUDY ON CREDIT RISK

Lessons From the Collapse of Hedge Fund Long-Term Capital Management
Barings, the Russian meltdown, Metallgesellschaft, Procter & Gamble, LTCM. These are all events in the financial markets which have become marker buoys to show us where we went wrong, in the hope that we won't allow quite the same thing to happen again. The common weakness, in these cases, was the misguided assumption that our counterparty, and the market it was operating in, were performing within manageable limits. But once those limits were crossed, for whatever reason, disaster was difficult to head off.

The LTCM fiasco is full of lessons about:
1. Model risk
2. Unexpected correlation, or the breakdown of historical correlations
3. The need for stress-testing
4. The value of disclosure and transparency
5. The danger of over-generous extension of trading credit
6. The woes of investing in star quality
7. And investing too little in game theory.
The latter because LTCM's partners were playing a game up to the hilt.

John Meriwether, who founded Long-Term Capital Partners in 1993, had been head of fixed income trading at Salomon Brothers. Even when forced to leave Salomon in 1991, in the wake of the firm's treasury auction rigging scandal (another marker buoy), Meriwether continued to command huge loyalty from a team of highly cerebral relative-value fixed income traders, and considerable respect from the street. Teamed up with a handful of these traders, two Nobel laureates, Robert Merton and Myron Scholes, and former regulator David Mullins, Meriwether and LTCM had more credibility than the average broker/dealer on Wall Street. It was a game, in that LTCM was unregulated, free to operate in any market, without capital charges and with only light reporting requirements to the US Securities & Exchange Commission (SEC). It traded on its good name with many respectable counterparties as if it was a member of the same club. That meant an ability to put on interest rate swaps at the market rate for no initial margin - an essential part of its strategy. It meant being able to borrow 100% of the value of any top-grade collateral, and with that cash to buy more securities and post them as collateral for further borrowing: in theory it could leverage itself to infinity. In LTCM's first two full years of operation it produced 43% and 41% return on equity and had amassed an investment capital of $7 billion.

Meriwether was renowned as a relative-value trader. Relative value means (in theory) taking little outright market risk, since a long position in one instrument is offset by a short position in a similar instrument or its derivative. It means betting on small price differences which are likely to converge over time as the arbitrage is spotted by the rest of the market and eroded. Trades typical of early LTCM were, for example, to buy Italian government bonds and sell German Bund futures; to buy theoretically underpriced off-the-run US treasury bonds (because they are less liquid) and go short on-the-run (more liquid) treasuries. It played the same arbitrage in the interest-rate swap market, betting that the spread between swap rates and the most liquid treasury bonds would narrow. It played long-dated callable Bunds against DM swaptions. It was one of the biggest players on the world's futures exchanges, not only in debt but also equity products. To make 40% return on capital, however, leverage had to be applied. In theory, market risk isn't increased by stepping up volume, provided you stick to liquid instruments and don't get so big that you yourself become the market. Some of the big macro hedge funds had encountered this problem and reduced their size by giving money back to their investors. When, in the last quarter of 1997, LTCM returned $2.7 billion to investors, it was assumed to be for the same reason: a prudent reduction in its positions relative to the market. But it seems the positions weren't reduced relative to the capital reduction, so the leverage increased. Moreover, other risks had been added to the equation. LTCM played the credit spread between mortgage-backed securities (including Danish mortgages) or double-A corporate bonds and the government bond markets. Then it ventured into equity trades. It sold equity index options, taking big premium in 1997. It took speculative positions in takeover stocks, according to press reports. One such was Tellabs, whose share price fell over 40% when it failed to take over Ciena, says one account. A filing with the SEC for June 30, 1998 showed that LTCM had equity stakes in 77 companies, worth $541 million. It also got into emerging markets, including Russia. One report said Russia was 8% of its book, which would come to $10 billion! Some of LTCM's biggest competitors, the investment banks, had been clamouring to buy into the fund. Meriwether applied a formula which brought in new investment, as well as providing him and his partners with a virtual put option on the performance of the fund. During 1997, under this formula [see separate section below, titled UBS Fiasco], UBS put in $800 million in the form of a loan and $266 million in straight equity. Credit Suisse Financial Products put in a $100 million loan and $33 million in equity. Other loans may have been secured in this way, but they haven't been made public. Investors in LTCM were pledged to keep in their money for at least two years. LTCM entered 1998 with its capital reduced to $4.8 billion. A New York Sunday Times article says the big trouble for LTCM started on July 17, 1998, when Salomon Smith Barney announced it was liquidating its dollar interest arbitrage positions: "For the rest of that month, the fund dropped about 10% because Salomon Brothers was selling all the things that Long-Term owned."
[The article was written by Michael Lewis, former Salomon bond trader


and author of Liars Poker. Lewis visited his former colleagues at LTCM after the crisis and describes some of the trades on the firms books] On August 17,1998 Russia declared a moratorium on its rouble debt and domestic dollar debt. Hot money, already jittery because of the Asian crisis, fled into high quality instruments. Top preference was for the most liquid US and G-10 government bonds. Spreads widened even between on- and off-the-run US treasuries. Most of LTCMs bets had been variations on the same theme, convergence between liquid treasuries and more complex instruments that commanded a credit or liquidity premium. Unfortunately convergence turned into dramatic divergence. LTCMs counterparties, marking their LTCM exposure to market at least once a day, began to call for more collateral to cover the divergence. On one single day, August 21, the LTCM portfolio lost $550 million, writes Lewis. Meriwether and his team, still convinced of the logic behind their trades, believed all they needed was more capital to see them through a distorted market. Perhaps they were right. But several factors were against LTCM. 1. Who could predict the time-frame within which rates would converge again?


2. Counterparties had lost confidence in themselves and LTCM.

3. Many counterparties had put on the same convergence trades, some of them as disciples of LTCM. 4. Some counterparties saw an opportunity to trade against LTCMs known or imagined positions.

In these circumstances, leverage is not welcome. LTCM was being forced to liquidate to meet margin calls.

On September 2, 1998 Meriwether sent a letter to his investors saying that the fund had lost $2.5 billion or 52% of its value that year, $2.1 billion in August alone. Its capital base had shrunk to $2.3 billion. Meriwether was looking for fresh investment of around $1.5 billion to carry the fund through. He approached those known to have such investible capital, including George Soros, Julian Robertson and Warren Buffett, chairman of Berkshire Hathaway and previously an investor in Salomon Brothers [LTCM incidentally had a $14 million equity stake in Berkshire Hathaway], and Jon Corzine, then co-chairman and cochief executive officer at Goldman Sachs, an erstwhile classmate at the University of Chicago. Goldman and JP Morgan were also asked to scour the market for capital. But offers of new capital werent forthcoming. Perhaps these big players were waiting for the price of an equity stake in LTCM to fall further. Or they were making money just trading against LTCMs positions. Under these circumstances, if true, it was difficult and dangerous for LTCM to show potential buyers more details of its portfolio. Two Merrill executives visited LTCM headquarters on September 9, 1998for a due diligence meeting, according to a later Financial Times report (on October 30, 1998). They were provided with general information about the funds portfolio, its strategies, the losses to date and the intention to reduce risk. But LTCM didnt disclose its trading positions, books or documents of any kind, Merrill is quoted as saying. The US Federal Reserve system, particularly the New York Fed which is closest to Wall Street, began to hear concerns about LTCM
from its constituent banks. In the third week of September, Bear Stearns, which was LTCM's clearing agent, said it wanted another $500 million in collateral to continue clearing LTCM's trades. On Friday September 18, 1998, New York Fed chairman Bill McDonough made a series of calls to senior Wall Street officials to discuss overall market conditions, he told the House Committee on Banking and Financial Services on October 1: "Everyone I spoke to that day volunteered concern about the serious effect the deteriorating situation of Long-Term could have on world markets." Peter Fisher, executive vice president at the NY Fed, decided to take a look at the LTCM portfolio. On Sunday September 20, 1998, he and two Fed colleagues, assistant treasury secretary Gary Gensler, and bankers from Goldman and JP Morgan, visited LTCM's offices at Greenwich, Connecticut. They were all surprised by what they saw. It was clear that, although LTCM's major counterparties had closely monitored their bilateral positions, they had no inkling of LTCM's total off balance sheet leverage. LTCM had done swap upon swap with 36 different counterparties. In many cases it had put on a new swap to reverse a position rather than unwind the first swap, which would have required a mark-to-market cash payment in one direction or the other. LTCM's on balance sheet assets totalled around $125 billion, on a capital base of $4 billion, a leverage of about 30 times. But that leverage was increased tenfold by LTCM's off balance sheet business, whose notional principal ran to around $1 trillion. The off balance sheet contracts were mostly nettable under bilateral Isda (International Swaps & Derivatives Association) master agreements. Most of them were also collateralized. Unfortunately the value of the collateral had taken a dive since August 17. Surely LTCM, with two of the original masters of derivatives and option valuation among its partners, would have put its portfolio through stress tests to match recent market turmoil. But, like many other value-at-risk (VaR) modellers on the street, their worst-case scenarios had been outplayed by the horribly correlated behaviour of the market since August 17. Such a flight to quality hadn't been predicted, probably because it was so clearly irrational. According to LTCM managers, their stress tests had involved looking at the 12 biggest deals with each of their top 20 counterparties. That produced a worst-case loss of around $3 billion. But on that Sunday evening it seemed the mark-to-market loss, just on those 240-or-so deals, might reach $5 billion. And that was ignoring all the other trades, some of them in highly speculative and illiquid instruments. The next day, Monday September 21, 1998, bankers from Merrill, Goldman and JP Morgan continued to review the problem. It was still hoped that a single buyer for the portfolio could be found - the cleanest solution. According to Lewis's article, LTCM's portfolio had its second biggest loss that day, of $500 million. Half of that, says Lewis, was lost on a short position in five-year equity options. Lewis records brokers' opinion that AIG had intervened in thin markets to drive up the option price to profit from LTCM's weakness. At that time, as was learned later, AIG was part of a consortium negotiating to buy LTCM's portfolio. By this time LTCM's capital base had dwindled to a mere $600 million. That evening, UBS, with its particular exposure on an $800 million credit, with $266

million invested as a hedge, sent a team to Greenwich to study the portfolio. The Feds Peter Fischer invited those three banks and UBS to breakfast at the Fed headquarters in Liberty Street the following day. The bankers decided to form working groups to study possible market solutions to the problem, given the absence of a single buyer. Proposals included buying LTCMs fixed income positions, and lifting the equity positions (which were a mixture of index spread trades and total return swaps, and the takeover bets). During the day a third option emerged as the most promising: seeking recapitalization of the portfolio by a consortium of creditors. But any action had to be taken swiftly. The danger was a single default by LTCM would trigger cross-default clauses in its Isda master agreements precipitating a mass close-out in the over-thecounter derivatives markets. Banks terminating their positions with LTCM would have to rebalance any hedge they might have on the other side. The market would quickly get wind of their need to rebalance and move against them. Mark-to-market values would descend in a vicious spiral. In the case of the French equity index, the CAC 40, LTCM had apparently sold short up to 30% of the volatility of the entire underlying market. The Banque de France was worried that a rapid close-out would severely hit French equities. There was a wider concern that an unknown number of market players had convergence positions similar or identical to those of LTCM. In such a one-way market there could be a panic rush for the door. A meltdown of developed markets on top of the panic in emerging markets seemed a real possibility. LTCMs clearing agent Bear Stearns was threatening to foreclose the next day if it didnt see $500 million more collateral. Until now, LTCM had resisted the temptation to draw on a $900 million standby facility that had been syndicated by Chase Manhattan Bank, because it knew that the action would panic its counterparties. But the situation was now desperate. LTCM asked Chase for $500 million. It received only $470 million since two syndicate members refused to chip in.



To take the consortium plan further, the biggest banks, either big creditors to LTCM or big players in the over-the-counter markets, were asked to a meeting at the Fed that evening. The plan was to get 16 of them to chip in $250 million each to recapitalize LTCM at $4 billion.

The four core banks met at 7pm and reviewed a term sheet which had been drafted by Merrill Lynch. Then at 8.30 bankers from nine more institutions showed. They represented: Bankers Trust, Barclays, Bear Stearns, Chase, Credit Suisse First Boston, Deutsche Bank, Lehman Brothers, Morgan Stanley, Credit Agricole, Banque Paribas, Salomon Smith Barney, Societe Generale. David Pflug, head of global credit risk at Chase warned that nothing would be gained a) by raking over the mistakes that had got them in this room, and b) by arguing about who had the biggest exposure: they were all in this equally and together. The delicate question was how to preserve value in the LTCM portfolio, given that banks around the room would be equity investors, and yet, at the same time, they would be seeking to liquidate their own positions with LTCM to maximum advantage. It was clear that John Meriwether and his partners would have to be involved in keeping such a complex portfolio a going concern.
But what incentive would they have if they no longer had an interest in the profits? Chase insisted that any bailout would first have to return the $470 million drawn down on the syndicated standby facility. But nothing could be finalized that night, since few of the representatives present could pledge $250 million or more of their firm's money. The meeting resumed at 9.30 the next morning. Goldman Sachs had a surprise: its client, Warren Buffett, was offering to buy the LTCM portfolio for $250 million, and recapitalize it with $3 billion from his Berkshire Hathaway group, $700 million from AIG and $300 million from Goldman. There would be no management role for Meriwether and his team. None of LTCM's existing liabilities would be picked up, yet all current financing had to stay in place. Meriwether had until 12.30 to decide. By 1pm it was clear that Meriwether had rejected the offer, either because he didn't like it, or, according to his lawyers, because he couldn't do so without consulting his investors, which would have taken him over the deadline. The bankers were somewhat flabbergasted by Goldman's dual role. Despite frequent requests for information about other possible bidders, Goldman had dropped no hint at previous meetings that there was something in the pipeline. Now the banks were back to the consortium solution. Since there were only 13 banks, not 16, they'd have to put in more than $250 million each. Bear Stearns offered nothing, feeling that it had enough risk as LTCM's clearing agent. [Their special relationship may have been the source of some acrimony: LTCM had an $18 million equity stake in Bear Stearns, matched by investments in LTCM of $10 million each by Bear Stearns principals James Cayne and Warren Spector.] Lehman Brothers also declined to participate. In the end 11 banks put in $300 million each, Societe Generale $125 million, and Credit Agricole and Paribas $100 million each, reaching a total fresh equity of $3.625 billion. Meriwether and his team would retain a stake of 10% in the company. They would run the portfolio under the scrutiny of an oversight committee representing the new shareholding consortium. The message to the market was that there would be no fire-sale of assets. The LTCM portfolio would be managed as a going concern. In the first two weeks after the bail-out, LTCM continued to lose value, particularly on its dollar/yen trades, according to press reports which put the loss at $200 million to $300 million. There were more attempts to sell the portfolio to a single buyer. According to press reports the new LTCM shareholders had further talks with Buffett, and with Saudi prince Alwaleed bin Talal bin Abdulaziz. But there was no sale. By mid-December, 1998 the fund was reporting a profit of $400 million, net of fees to LTCM partners and staff. In early February, 1999 there were press reports of divisions between banks in the bailout consortium, some wishing to get their money out by the end of the year, others happy to stay for the ride of at least three years. There was also a dispute about how much Chase was charging for a funding facility to LTCM. Within six months there were reports that Meriwether and some of his team wanted to buy out the banks, with a little help from their friend Jon Corzine, who was due to leave Goldman Sachs after its flotation in May, 1999.
By June 30, 1999 the fund was up 14.1%, net of fees, from the previous September. Meriwether's plan, approved by the consortium, was apparently to redeem the fund, now valued at around $4.7 billion, and to start another fund concentrating on buyouts and mortgages. On July 6, 1999, LTCM repaid $300 million to its original investors, who had a residual stake in the fund of around 9%. It also paid out $1 billion to the 14 consortium members. It seemed Meriwether was bouncing back.

Post mortem
The LTCM fiasco naturally inspired a hunt for scapegoats:
1. First in line were Meriwether and his crew of market professors.
2. Second were the banks which conspired to give LTCM far more credit, in aggregate, than they'd give a medium-size developing country. Particularly distasteful was the combination of credit exposure by the institutions themselves, and personal investment exposure by the individuals who ran them.



Merrill Lynch protested that a $22 million investment on behalf of its employees was not sinister. LTCM was one of four investment vehicles in which employees could opt to have their deferred payments invested. Nevertheless, that rather cosy relationship may have made it more difficult for credit officers to ask tough questions of LTCM. There were accusations of crony capitalism as Wall Street firms undertook to bail out, with shareholders' money, a firm in which their officers had invested, or were thought to have invested, part of their personal wealth.

3. Third in line was the US Federal Reserve system. Although no public money was spent - apart from hosting the odd breakfast - there was the implication that the Fed was standing behind the banks, ready to provide liquidity until the markets became less jittery and more rational. Wouldnt this simply encourage other hedge funds and lenders to hedge funds to be as reckless in future? 4. Fourth culprit was poor information. Scant disclosure of its activities and exposures, by LTCM, as with many hedge funds, was a major factor in allowing it to put on such leverage. There was also no mechanism whereby counterparties could learn how far LTCM was exposed to other counterparties.


5. Fifth was sloppy market practice, such as allowing a non-bank counterparty to write swaps and pledge collateral with no initial margin, as if it were part of a peer group of top-tier banks.
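The point about initial margin can be made concrete with a small sketch (not from the source; all figures are hypothetical). It compares a dealer whose only protection is collateral equal to the current mark-to-market of a position with one who also holds an upfront initial-margin buffer, when the position gaps against the counterparty before it can be closed out.

# Illustrative sketch, assuming hypothetical numbers: why an initial-margin
# buffer matters when collateral only tracks current mark-to-market.

def dealer_shortfall(exposure, collateral, adverse_move, liquidation_haircut):
    """Loss to the dealer if the counterparty defaults after an adverse move.

    exposure: current mark-to-market owed to the dealer ($m)
    collateral: collateral held against that exposure ($m)
    adverse_move: further increase in exposure before close-out ($m)
    liquidation_haircut: fraction of collateral value lost in a forced sale
    """
    exposure_at_default = exposure + adverse_move
    collateral_value = collateral * (1 - liquidation_haircut)
    return max(exposure_at_default - collateral_value, 0.0)

mtm = 100.0      # $m, current exposure
move = 15.0      # $m, gap move between the last margin call and close-out
haircut = 0.05   # 5% lost liquidating collateral in a stressed market

# Variation margin only: collateral equals current mark-to-market
print(dealer_shortfall(mtm, mtm, move, haircut))          # 20.0 ($m shortfall)

# With an upfront initial margin of 25% on top of variation margin
print(dealer_shortfall(mtm, mtm * 1.25, move, haircut))   # 0.0 (buffer absorbs the gap)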

1. LTCM's risk management
Despite the presence of Nobel laureates closely identified with option theory, it seems LTCM relied too much on theoretical market-risk models and not enough on stress-testing, gap risk and liquidity risk. There was an assumption that the portfolio was sufficiently diversified across world markets to produce low correlation. But in most markets LTCM was replicating basically the same credit spread trade. In August and September 1998 credit spreads widened in practically every market at the same time. LTCM's risk managers kidded themselves that the resultant net position of LTCM's derivatives transactions was what mattered, as if it bore no relation to the billions of dollars of notional underlying instruments. Each of those instruments and its derivative has a market price which can shift independently, and each is subject to liquidity risk. LTCM sources apparently complain that the market started trading against its known positions. That seems like special pleading. Meriwether et al must have been in the markets long enough to know they are merciless, and to have been just as merciless themselves. All they that take the sword shall perish with the sword. [Matthew, xxvi, 52]

2. Risk management by LTCM counterparties
Practically the whole street had a blind spot when it came to LTCM. They forgot the useful discipline of charging non-bank counterparties initial margin on swap and repo transactions. Collectively they were responsible for allowing LTCM to build up layer upon layer of swap and repo positions. They believed that the first-class collateral they held was sufficient to mitigate their loss if LTCM disappeared. It may have been over time, but their margin calls to top up deteriorating positions simply pushed LTCM further towards the brink. Their credit assessment of LTCM didn't include a global view of its leverage and its relationship with other counterparties. A working group on highly leveraged institutions set up by the Basle Committee on Banking Supervision reported its findings in January 1999, drawing many lessons from the LTCM case. It criticized the banks for building up such exposures to such an opaque institution. They had placed "a heavy reliance on collateralization of direct mark-to-market exposures", the report said. This in turn made it possible for banks to compromise other critical elements of effective credit risk management, including upfront due diligence, exposure measurement methodologies, the limit-setting process, and ongoing monitoring of counterparty exposure, especially concentrations and leverage. The working group also noted that banks' covenants with LTCM did not require the posting of, or increase in, initial margin as the risk profile of the counterparty changed, for instance as leverage increased. (For the full reports, see Sound Practices for Banks' Interactions with Highly Leveraged Institutions, and Banks' Interactions with Highly Leveraged Institutions.) Another report, in June 1999, by the Counterparty Risk Management Policy Group, a group of 12 leading investment banks, suggested many ways in which information-sharing and transparency could be improved. It noted the importance of measuring liquidity risk, and of improving market conventions and market practices, such as charging initial margin.

3. Supervision
Supervisors themselves showed a certain blinkered view when it came to banks' and securities firms' relationships with hedge funds, and a huge fund like LTCM in particular. The US Securities & Exchange Commission (SEC) appears to assess the risk run by individual broker-dealers without having enough regard for what is happening in the sector as a whole, or in the firms' unregulated subsidiaries. In testimony to the House Committee on Banking and Financial Services on October 1, 1998, Richard Lindsey, director of the SEC's market regulation division, recalled the following: When the commission learned of LTCM's financial difficulties in August, the commission staff and the New York Stock Exchange surveyed major broker-dealers known to have credit exposure to one or more large hedge funds. The results of our


initial survey indicated that no individual broker-dealer had exposure to LTCM that jeopardized its required regulatory capital or its financial stability. As the situation at LTCM continued to deteriorate, we learned that although significant amounts of credit were extended to LTCM by US securities firms, this lending was on a secured basis, with collateral collected and marked to market daily. Thus, broker-dealers' lending to LTCM was done in a manner that was consistent with the firms' normal lending activity. The collateral collected from LTCM consisted primarily of highly liquid assets, such as US treasury securities or G-7 country sovereign debt. Any shortfalls in collateral were met by margin calls to LTCM. As of the date of the rescue plan, it appears that LTCM had met all of its margin calls by US securities firms. Moreover, our review of the risk assessment information submitted to the commission suggests that any exposure to LTCM existed outside the US broker-dealer, either in the holding company or its unregistered affiliates. The sad truth revealed by this testimony is that the SEC and the NYSE were concerned only with the risk ratios of their registered firms, and were ignorant and unconcerned, as were the firms themselves, about the market's aggregate exposure to LTCM. Bank of England experts note the absence of any covenant between LTCM and its counterparties that would have obliged LTCM to disclose its overall gearing. UK banks have long been in the habit of demanding covenants from non-bank counterparties concerning their overall gearing, the Bank of England says.

4. Was there moral hazard?
The simple answer is yes, since the bailout of LTCM gave comfort that the Fed will come in and broker a solution, even if it doesn't commit funds. The Fed's intervention also arguably tempted Meriwether not to accept the offer from Buffett, AIG and Goldman. The offer, heavily conditional though it was, shows that the LTCM portfolio had a perceived market value. A price might have been reached in negotiations between Buffett and Meriwether. Meriwether's argument [and the Fed's] is that Buffett's deadline of 12.30 didn't give Meriwether time to consult with LTCM's investors: he was legally unable to accept the offer.

The true test of moral hazard is whether the Fed would be expected to intervene in the same way next time. Greenspan pointed to a unique set of circumstances which made an LTCM solution particularly pressing. It seems questionable whether the Fed would act as broker for another fund bailout unless there were also such wide systemic uncertainties.

5. Was there truly a systemic risk?
Since there was no global meltdown it is difficult to prove that there was a real danger of such a thing last September. But if the officers at the US Federal Reserve had waited to see what happened, no one would have thanked them after the event. In the judgment of this writer, the world financial system owes a lot to the prompt action of Greenspan, McDonough, Fisher and others at the Fed for their willingness to meet the problem fair and square. One shudders to think what the Bank of England (FSA) might have done, given its constructive ambiguity during the Barings crisis. But the counter-argument is also valid. Those Wall Street firms, once they knew the size of the problem, had only one sensible course of action: to bankroll a co-ordinated rescue. They had the resources to prevent a meltdown and it took only a night and a day to pool them. Mutual self-interest concentrates the mind wonderfully. It seems that in the developed world, since the early 1990s, financial firms have built up enough capital to meet most disasters the world can throw at them. Their mistakes in emerging markets were costly both for them and for the countries concerned, but they haven't threatened the life of the world financial system. It seems the mechanisms for restructuring and acquisition are so swift that the demise of a financial firm simply means it will be stripped of the trash and carved up. In a down-cycle, however, the outcome could be very different. Moreover, the social costs of this financial overreach, followed by cannibalism, could be considerable. Systemic, no; ripe for concerted private and public intervention, yes. On September 29, 1998, six days after the LTCM bailout, US Federal Reserve chairman Alan Greenspan cut Fed fund rates by 25 basis points to 5.25%. On October 15, 1998 he cut them by another quarter. His critics associate these cuts directly with the bail-out of LTCM: it was an extra dose of medicine to make sure the recovery worked. Some sources attribute the cut to rumours that another hedge fund was in trouble. The more generous view is that, if the financial markets were in disarray, "we ain't seen nothing yet". Bruce Jacobs, who has followed the systemic implications of the 1929, 1987 and subsequent mini-crashes, fearful of the dangers of globally traded derivatives, writes in a new book: "Had LTC not been bailed out, the immediate liquidation of its highly leveraged bond, equity, and derivatives positions may have had effects, particularly on the bond market, rivaling the effects on the equity market of the forced liquidations of insured stocks in 1987 and margined stocks in 1929. Given the links between LTC and investment and commercial banks, and between its positions in different asset markets and different countries' markets, the systemic risk much talked about in connection with the growth of derivatives markets may have become a reality." [Capital Ideas and Market Realities, Blackwell, 1999, page 293]


It is possible to argue that a market solution was found. Fourteen banks put up their own money, regarding it as a medium-term investment from which they expected to make a profit. From a value-preservation point of view it was an enlightened solution, even if it did seem to reward those whose recklessness had created the problem. Federal Reserve chairman Alan Greenspan defended the Fed's action at the October 1 hearing in the House Committee on Banking and Financial Services as follows: This agreement [by the rescuing banks] was not a government bailout, in that Federal Reserve funds were neither provided nor ever even suggested. Agreements were not forced upon unwilling market participants. Creditors and counterparties calculated that LTCM and, accordingly, their claims, would be worth more over time if the liquidation of LTCM's portfolio was orderly as opposed to being subject to a fire sale. And with markets currently volatile and investors skittish, putting a special premium on the timely resolution of LTCM's problems seemed entirely appropriate as a matter of public policy.


The losers
Among the investors who lost their capital in LTCM (according to press reports) were:
LTCM partners - $1.1 billion ($1.5 billion at the beginning of 1998, offset by their $400 million stake in the rescued fund)
Liechtenstein Global Trust - $30 million
Bank of Italy - $100 million
Credit Suisse - $55 million
UBS - $690 million
Merrill Lynch (employees' deferred payment) - $22 million
Donald Marron, chairman, PaineWebber - $10 million
Sandy Weill, co-CEO, Citigroup - $10 million
McKinsey executives - $10 million
Dresdner Bank - $145 million
Bear Stearns executives - $20 million
Sumitomo Bank - $100 million
Prudential Life Corp - $5.43 million
There were no reported numbers for the following organisations:
Bank Julius Baer (for clients)
Republic National Bank
University of Pittsburgh
St John's University endowment fund

UBS fiasco
The biggest single loser in the LTCM debacle was UBS, which was forced to write off Sfr950 million ($682 million) of its exposure. The UBS involvement with LTCM pre-dated the merger of Union Bank of Switzerland and Swiss Bank Corporation in December 1998. Various heads rolled, including those of chairman Mathis Cabiallavetta (formerly chief executive of Union Bank of Switzerland), Werner Bonadurer, chief operating officer, Felix Fischer, chief risk officer, and Andy Siciliano, head of fixed income (who had been with SBC). UBS's deal with LTCM was a variation on other attempts to turn hedge funds into a securitized asset class with a protected downside. However, in this case UBS was protecting the downside and LTCM was taking a good deal of the upside. The sweetener for UBS was a structure that looked more like an option than a loan, turning any income into a capital gain, and an opportunity to invest directly in LTCM. For a premium of $300 million UBS sold LTCM a seven-year European call option on 1 million of LTCM's own shares, valued then at $800 million. To hedge the position - the only way it could be done - UBS bought $800 million worth of LTCM shares. UBS also invested $266 million (most of the $300 million premium income) directly in LTCM. Such an investment had to be held for a minimum of three years. This transaction was completed in three tranches in June, August and October 1997. The deal was calculated so that the $300 million premium was equivalent to a coupon of Libor plus 50 basis points over the seven years.

Assuming that LTCM performed well, the deal provided UBS with a steady, tax-efficient return plus a share in the upside, through its $266 million stake. But it is clear now that UBS risk managers never faced the possibility of a collapse of LTCM, which would have left them with a $766 million exposure ($800 million hedge, plus $266 million investment, less $300 million option premium). That is, they didn't wake up to it, apparently, until around April 1998, in a post-merger review, when it was too late to do much about it. Credit Suisse Financial Products, which did a similar deal for $100 million, set that as the maximum it was prepared to lose. An interesting aspect of the UBS deal is to consider it from LTCM's point of view. LTCM secured $800 million of new investment capital at Libor plus 50 basis points. It had a call on all returns above that level. UBS's obligation, to convert any shares it wanted to sell into a loan, provided LTCM with a synthetic seven-year put on its own performance. Was this an added incentive to roll the dice? It was a cheap gambling stake (of the kind Nick Leeson enjoyed at Barings before March 1995).
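A rough back-of-the-envelope check of the deal economics described above, using the press-reported figures, is sketched below. The coupon-equivalent calculation ignores discounting and is only an order-of-magnitude check, not UBS's actual pricing.

# Illustrative sketch, assuming the press-reported figures quoted above.

hedge_shares      = 800.0   # $m of LTCM shares bought to hedge the written call
direct_investment = 266.0   # $m invested directly in LTCM
premium_received  = 300.0   # $m premium for the seven-year call
tenor_years       = 7

# Worst case if LTCM's equity goes to zero: both the hedge shares and the
# direct stake are wiped out, offset only by the premium already received.
worst_case_loss = hedge_shares + direct_investment - premium_received
print(worst_case_loss)   # 766.0 ($m), the figure quoted above

# Simple (undiscounted) annual yield implied by the premium on the $800m notional;
# of the same order as the Libor + 50bp coupon mentioned above, and discounting
# would raise it somewhat.
annual_yield = premium_received / hedge_shares / tenor_years
print(round(annual_yield * 100, 2))   # roughly 5.36% a year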


Corrective response
The Basle Committee on Banking Supervision's report on highly leveraged institutions (HLIs) in January 1999 suggests that supervisors demand higher capital charges for exposure to highly leveraged institutions where there is no limit to overall leverage: possibly all exposures to all counterparties not covered by covenants on leverage should carry a higher weight. It further considers the possibility of extending a credit register for bank loans in the context of HLIs. The register would entail collecting, in a centralized place, information on the exposures of international financial intermediaries to single counterparties that have the potential to create systemic risk (i.e. major HLIs). Exposures would cover both on- and off-balance-sheet positions. Counterparties, supervisors and central banks could then obtain information about the overall indebtedness of the single counterparty.
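The sketch below illustrates, in schematic form, the kind of aggregation such a credit register would perform. The banks, fund names and exposure figures are invented for illustration only.

# Illustrative sketch, assuming hypothetical reporters and exposures:
# aggregating every institution's exposure to a single counterparty.

from collections import defaultdict

# (reporting bank, counterparty, exposure in $m, on- or off-balance-sheet)
reports = [
    ("Bank A", "Fund X", 400, "on"),
    ("Bank B", "Fund X", 650, "off"),
    ("Bank C", "Fund X", 300, "off"),
    ("Bank A", "Fund Y", 120, "on"),
]

totals = defaultdict(float)
for bank, counterparty, exposure, _ in reports:
    totals[counterparty] += exposure

for counterparty, total in totals.items():
    print(f"{counterparty}: aggregate exposure ${total:,.0f}m across all reporters")
# No single reporter sees the $1,350m concentration on Fund X; a register would.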



LESSON 28: CASE STUDY

The chain of events which led to the collapse of Barings, Britain's oldest merchant bank, is a demonstration of how not to manage a derivatives operation. The control and risk management lessons to be learnt from the collapse of this 200-year-old institution apply as much to cash positions as they do to derivative ones, but the pure leverage of derivatives makes it imperative that proper controls are in place. Since only a small amount of money (called a margin) is needed to establish a position, a firm could find itself facing financial obligations way beyond its means. The leverage and liquidity offered by major futures contracts - such as the Nikkei 225, the S&P 500 or Eurodollars - mean that these obligations, once in place, mount very quickly, thus bringing down an institution with lightning speed. This is in stark contrast to bad loans or cash investments, whose ill-effects take years to ruin an institution, as demonstrated by the cases of British & Commonwealth Bank or Bank of Credit and Commerce International (BCCI).

I. How Leeson Broke Barings
The activities of Nick Leeson on the Japanese and Singapore futures exchanges, which led to the downfall of his employer, Barings, are well documented. The main points are recounted here to serve as a backdrop to the main topic of this chapter: the policies, procedures and systems necessary for the prudent management of derivative activities. Barings collapsed because it could not meet the enormous trading obligations which Leeson established in the name of the bank. When it went into receivership on February 27, 1995, Barings, via Leeson, had outstanding notional futures positions on Japanese equities and interest rates of US$27 billion: US$7 billion on the Nikkei 225 equity contract and US$20 billion on Japanese government bond (JGB) and Euroyen contracts. Leeson also sold 70,892 Nikkei put and call options with a nominal value of $6.68 billion. The nominal size of these positions is breathtaking; their enormity is all the more astounding when compared with the bank's reported capital of about $615 million. The size of the positions can also be underlined by the fact that in January and February 1995, Barings Tokyo and London transferred US$835 million to its Singapore office to enable the latter to meet its margin obligations on the Singapore International Monetary Exchange (SIMEX).

Reported activities (Fantasy)
The build-up of the Nikkei positions took off after the Kobe earthquake of January 17. This is reflected in Figure 10.1 - the chart shows that Leeson's positions went in the opposite direction to the Nikkei: as the Japanese stock market fell, Leeson's position increased. Before the Kobe earthquake, with the Nikkei trading in a range of 19,000 to 19,500, Leeson had long futures positions of approximately 3,000 contracts on the Osaka Stock Exchange. (The equivalent number of contracts on the Singapore International Monetary Exchange is 6,000 because SIMEX contracts are half the size of the OSE's.) A few days after the earthquake Leeson started an aggressive buying programme, which culminated in a high of 19,094 contracts reached about a month later on February 17.
Figure 10.1: Barings' Long Positions against the Nikkei 225 Average. Source: Datastream and Osaka Securities Exchange.

But Leeson's Osaka position, which was public knowledge since the OSE publishes weekly data, reflected only half of his sanctioned trades. If Leeson was long on the OSE, he had to be short twice the number of contracts on SIMEX. Why? Because Leeson's official trading strategy was to take advantage of temporary price differences between the SIMEX and OSE Nikkei 225 contracts. This arbitrage, which Barings called switching, required Leeson to buy the cheaper contract and to sell simultaneously the more expensive one, reversing the trade when the price difference had narrowed or disappeared. This kind of arbitrage activity has little market risk because positions are always matched. But Leeson was not short on SIMEX; in fact he was long approximately the number of contracts he was supposed to be short. These were unauthorised trades which he hid in an account named Error Account 88888. He also used this account to execute all his unauthorised trades in Japanese Government Bond and Euroyen futures and Nikkei 225 options: together these trades were so large that they ultimately broke Barings. Table 10.1 gives a snapshot of Leeson's unauthorised trades versus the trades that he reported. For the rest of the chapter, contracts will be discussed or converted into SIMEX contract sizes.

Unreported positions (Fact)
The most striking point of Table 10.1 is the fact that Leeson sold 70,892 Nikkei 225 options worth about $7 billion without the knowledge of Barings London. His activity peaked in November and December 1994, when in those two months alone he sold 34,400 options.

Table 10.1: Fantasy versus Fact - Leeson's Positions as at End February 1995
(number of contracts(1); nominal value in US$ amounts)

              Reported(3)                 Actual(4)                         Actual position in terms of open interest of relevant contract(2)
Futures
Nikkei 225    30,112 ($2,809 million)     long 61,039 ($7,000 million)      49% of March 1995 contract and 24% of June 1995 contract
JGB           15,940 ($8,980 million)     short 28,034 ($19,650 million)    85% of March 1995 contract and 88% of June 1995 contract
Euroyen       601 ($26.5 million)         short 6,845 ($350 million)        5% of June 1995 contract, 1% of September 1995 contract and 1% of December 1995 contract
Options
Nikkei 225    Nil                         37,925 calls ($3,580 million); 32,967 puts ($3,100 million)

Notes: 1. Expressed in terms of SIMEX contract sizes, which are half the size of those of the OSE and the TSE. For Euroyen, SIMEX and TIFFE contracts are of similar size. 2. Open interest figures for each contract month of each listed contract. For the Nikkei 225, JGB and Euroyen contracts, the contract months are March, June, September and December. 3. Leeson's reported futures positions were supposedly matched because they were part of Barings' switching activity, i.e. the number of contracts on either the Osaka Stock Exchange, the Singapore International Monetary Exchange or the Tokyo Stock Exchange. 4. The actual positions refer to the unauthorised trades held in error account 88888.
Source: The Report of the Board of Banking Supervision Inquiry into the Circumstances of the Collapse of Barings, Ordered by the House of Commons, Her Majesty's Stationery Office, 1995.

In industry parlance, Leeson sold straddles, i.e. he sold put and call options with the same strikes and maturities. Leeson earned premium income from selling well over 37,000 straddles over a fourteen-month period. Such trades are very profitable provided the Nikkei 225 is trading at the options' strike on expiry date, since both the puts and calls would expire worthless. The seller then enjoys the full premium earned from selling the options. (See Figure 10.2 for a graphical presentation of the profit and loss profile of a straddle.) If the Nikkei is trading near the options' strike on expiry, it could still be profitable because the earned premium more than offsets the small loss experienced on either the call (if the Tokyo market had risen) or the put (if the Nikkei had fallen).

Figure 10.2: Payoff Profile of a Straddle.

The strike prices of most of Leeson's straddle positions ranged from 18,500 to 20,000. He thus needed the Nikkei 225 to continue to trade in its pre-Kobe earthquake range of 19,000 - 20,000 if he was to make money on his option trades. The Kobe earthquake shattered Leeson's options strategy. On the day of the quake, January 17, the Nikkei 225 was at 19,350. It ended that week slightly lower at 18,950, so Leeson's straddle positions were starting to look shaky. The call options Leeson had sold were beginning to look worthless, but the put options would become very valuable to their buyers if the Nikkei continued to decline. Leeson's losses on these puts were unlimited and totally dependent on the level of the Nikkei at expiry, while the profits on the calls were limited to the premium earned. This point is key to understanding Leeson's actions, because prior to the Kobe earthquake his unauthorized book, i.e. account 88888, showed a flat position in Nikkei 225 futures. Yet on Friday 20 January, three days after the earthquake, Leeson bought 10,814 March 1995 contracts. No one is sure whether he bought these contracts because he thought the market had over-reacted to the Kobe shock or because he wanted to shore up the Nikkei to protect the long position which arose from the option straddles. (Leeson did not hedge his option positions prior to the earthquake, and his Nikkei 225 futures purchases after the quake cannot be construed as part of a belated hedging programme since he should have been selling rather than buying.)
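The payoff profile in Figure 10.2 can be reproduced with a few lines of arithmetic. The sketch below uses a hypothetical strike and premium, not Leeson's actual book; it shows why a short straddle keeps the full premium when the index finishes at the strike, and loses roughly one-for-one once the index moves far enough away.

# Illustrative sketch, assuming a hypothetical strike and premium.

def short_straddle_pnl(index_at_expiry, strike, premium_received):
    """P&L per straddle at expiry (ignores margin financing and early exercise)."""
    call_payout = max(index_at_expiry - strike, 0.0)   # owed to the call buyer
    put_payout = max(strike - index_at_expiry, 0.0)    # owed to the put buyer
    return premium_received - call_payout - put_payout

strike = 19_000
premium = 700      # combined premium for the put and the call, in index points

for nikkei in (20_000, 19_000, 18_500, 17_000):
    print(nikkei, short_straddle_pnl(nikkei, strike, premium))
# 20000 -300.0  : small loss, the call finishes in the money
# 19000  700.0  : full premium kept, both legs expire worthless
# 18500  200.0  : premium more than covers the small put payout
# 17000 -1300.0 : losses grow one-for-one as the index falls further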

When the Nikkei dropped 1000 points to 17,950 on Monday January 23, 1995, Leeson found himself showing losses on his two-day old long futures position and facing unlimited damage from selling put options. There was no turning back. Leeson, tried single-handedly to reverse the negative post-Kobe sentiment that swamped the Japanese stock market. On 27 January, account 88888 showed a long position of 27,158 March 1995 contracts. Over the next three weeks, Leeson doubled this long position to reach a high on 22nd February of 55,206 March 1995 contracts and 5640 June 1995 contracts. The large falls in Japanese equities, post-earthquake, also made the market more volatile. This did not help Leesons short option position either - a seller of options wants volatility to decline so that the value of the options decrease. With volatility on the rise, Leesons short options would have shown losses even if the Tokyo stock market had not plunged. Leeson engaged in unauthorized activities almost as soon as he started trading in Singapore in 1992. He took proprietary positions on SIMEX on both futures and options contracts. (His mandate from London allowed him to take positions only if they were part of switching and to execute client orders. He was never allowed to sell options.) Leeson lost money from his unauthorized trades almost from day one. Yet he was perceived in London as the wonder boy and turbo-arbitrageur who single-handedly contributed to half of Barings Singapores 1993 profits and half of the entire firms 1994 profits. The wide gap between fact and fantasy is illustrated in table 10.2 which not only shows the magnitude of Leesons recent losses but the fact that he always lost money. In 1994 alone, Leeson lost Barings US$296 million; his bosses thought he made them US$46 million, so they proposed paying him a bonus of US$720,000.
Table 10.2: Facts versus Fantasy - Profitability of Leeson's Trading Activities

Period                       Reported (GBP million)   Actual (GBP million)   Cumulative actual(1) (GBP million)
1 Jan 1993 to 31 Dec 1993    +8.83                     -21                    -23
1 Jan 1994 to 31 Dec 1994    +28.529                   -185                   -208
1 Jan 1995 to 27 Feb 1995    +18.567                   -619                   -827

Note: 1. The cumulative actual represents Leeson's cumulative losses carried forward.
Source: Report of the Board of Banking Supervision Inquiry into the Circumstances of the Collapse of Barings, Ordered by the House of Commons, Her Majesty's Stationery Office, 1995.

The cross-trade
How was Leeson able to deceive everyone around him? How was he able to post profits on his switching activity when he was actually losing? How was he able to show a flat book when he was taking huge long positions on the Nikkei and short positions on Japanese interest rates? The Board of Banking Supervision (BoBS) of the Bank of England, which conducted an investigation into the collapse of Barings, believes that the vehicle used to effect this deception was the cross-trade. A cross-trade is a transaction executed on the floor of an Exchange by just one Member who is both buyer and seller. If a Member has matching buy and sell orders from two different customer accounts for the same contract and at the same price, he is allowed to cross the transaction (execute the deal) by matching both his client accounts. However, he can only do this after he has declared the bid and offer price in the pit and no other member has taken it up. Under SIMEX rules, the Member must declare the prices three times. A cross-trade must be executed at market price. Leeson entered into a significant volume of cross transactions between account 88888 and account 92000 (Barings Securities Japan - Nikkei and JGB arbitrage), account 98007 (Barings London - JGB arbitrage) and account 98008 (Barings London - Euroyen arbitrage). After executing these cross-trades, Leeson would instruct the settlements staff to break down the total number of contracts into several different trades, and to change the trade prices thereon to cause profits to be credited to the switching accounts referred to above and losses to be charged to account 88888. Thus, while the cross-trades on the Exchange appeared on the face of it to be genuine and within the rules of the Exchange, the books and records of BFS, maintained in the Contac system (a settlement system used extensively by SIMEX members), reflected pairs of transactions adding up to the same number of lots at prices bearing no relation to those executed on the floor. Alternatively, Leeson would enter into cross-trades of smaller size than the above, but when these were entered into the Contac system he would arrange for the price to be amended, again enabling profit to be credited to the switching account and losses to be charged to account 88888.

Table 10.3 below is an example of how Leeson manipulated his books to show a profit on Barings' switching activity.

Table 10.3: Cross-trades between account '88888' and account '92000'
[The detail of this table is garbled in this extract. Its columns show, for cross-trades executed on dates in late January 1995: the number of contracts in account '88888'(2), whether bought or sold, the price per SIMEX, the average price per CONTAC, the value per SIMEX and per CONTAC in JPY millions, and the profit/(loss) credited to account '92000' in JPY millions.]

Notes: 1. This table is Figure 5.2 of the Report of the Board of Banking Supervision Inquiry into the Circumstances of the Collapse of Barings, Ordered by the House of Commons, Her Majesty's Stationery Office, 1995. 2. This column represents the size of Nikkei 225 cross-trades traded on the floor of SIMEX for the dates shown, with the other side being in account 92000.

The BoBS report notes: "In each instance, the entries in the Contac system reflected a number of spurious contract amounts at prices different to those transacted on the floor, reconciling to the total lot size originally traded. This had the effect of giving the impression from a review of the reported trades in account '92000' that these had taken place at different times during the day. This was necessary to deceive Barings Securities Japan into believing the reported profitability in account '92000' was a result of authorised arbitrage activity." The effect of this manipulation was to inflate reported profits in account 92000 at the expense of account 88888, which was also incurring substantial losses from the unauthorised trading positions taken by Leeson.

In addition to crossing trades on SIMEX between account 88888 and the switching accounts, Leeson also entered fictitious trades between these accounts which were never crossed on the floor of the Exchange. The effect of these off-market trades, which were not permitted by SIMEX, was again to credit the switching accounts with profits whilst charging account 88888 with losses. The bottom line of all these cross-trades was that Barings was counterparty to many of its own trades. Leeson bought from one hand and sold to the other, and in so doing did not lay off any of the firm's market risk. Barings was thus not arbitraging between SIMEX and the Japanese exchanges, but taking open (and very substantial) positions, which were buried in account 88888. It was the profit and loss statement of this account which correctly represented the revenue earned (or not earned) by Leeson. Details of this account were never transmitted to the treasury or risk control offices in London, an omission which ultimately had catastrophic consequences for Barings' shareholders and bondholders.
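A stylised illustration of the booking trick described above is sketched below. The contract count, tick value and prices are hypothetical; the point is simply that re-pricing the booked legs transfers profit from account 88888 to account 92000 without any genuine arbitrage being earned.

# Illustrative sketch, assuming hypothetical prices and a hypothetical tick value.

floor_price = 18_500        # price actually crossed on the exchange floor
lots = 200                  # total lots crossed between accounts 88888 and 92000

# As booked in the settlement system: the same 200 lots, but split into pieces
# at prices bearing no relation to the floor price (sold by 88888 to 92000).
booked_legs = [(120, 18_380), (80, 18_420)]   # (lots, booked price)

assert sum(l for l, _ in booked_legs) == lots  # lot totals still reconcile

# Account 92000 "buys" below the floor price, so it shows an instant gain;
# account 88888 absorbs the mirror-image loss.
tick_value = 5.0  # $ per index point per lot (hypothetical contract spec)
gain_92000 = sum(l * (floor_price - p) * tick_value for l, p in booked_legs)
loss_88888 = -gain_92000

print(gain_92000, loss_88888)   # 104000.0 -104000.0 : a pure transfer, nothing earned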

Figure 10.3, below, shows the number of cross-trades executed by Leeson. It is the difference between the solid line, which represents all the Nikkei trades of account 92000 not crossed into account 88888, and the broken line, which reflects the position Leeson reported to Barings' management. The figure graphically illustrates the chasm between reported and actual positions. For example, Barings' management thought the firm had a short position of 30,112 contracts on SIMEX on 24 February; in fact it was long 21,928 contracts after ignoring the trades crossed with account 88888.

Figure 10.3: Graph to show the Nikkei Position of Account 92000. Reproduced by permission from the Report of the Board of Banking Supervision Inquiry into the Circumstances of the Collapse of Barings.

II. Lessons from Leeson
Numerous reports have come out over the last three years with recommendations on best practices in risk management (see key risk concepts - risk control). Barings violated almost every recommendation. Because its management singularly failed to institute a proper managerial, financial and operational control system, the firm did not catch on, in time, to what Leeson was up to. Since the foundations for effective controls were weak, it is not surprising that the firm's flimsy system of checks and balances failed at a number of operational and management levels and in more than one location. The lessons from the Barings collapse can be divided into five main headings:
a. Segregation of front and back-office
b. Senior management involvement
c. Adequate capital
d. Poor control procedures
e. Lack of supervision


a. Segregation of front and back-office The management of Barings broke a cardinal rule of any trading operation - they effectively let Leeson settle his own trades by putting him in charge of both the dealing desk and the back office. This is tantamount to allowing the person who works a cash-till to bank in the days takings without an independent third party checking whether the amount banked it at the end of the day reconciles with the till receipts. The backoffice records, confirms and settles trades transacted by the front office, reconciles them with details sent by the banks counterparties and assesses the accuracy of prices used for its internal valuations. It also accepts/releases securities and payments for trades. Some back offices also provide the regulatory reports and management accounting. In a nutshell, the back office provides the necessary checks to prevent unauthorised trading and minimise the potential for fraud and embezzlement. Since Leeson was in charge of the back office, he had the final say on payments, ingoing and outgoing confirmations and contracts, reconciliation statements, accounting entries and position reports. He was perfectly placed to relay false information back to London. Abusing his position as head of the back-office, Leeson suppressed information on account 88888. This account was set up in July 1992 - it was designated an error account in Barings Futures Singapore system but as a Barings London client account in Simexs system. But Barings London did not know of its existence since Leeson had asked a systems consultant, Dr Edmund Wong, to remove error account 88888 from the daily reports which BFS sent electronically to London. This state of affairs existed from on or around 8 July 1992 to the collapse of Barings on 26 February 1995. (Information on account 88888 was however still contained in the margin file sent to London.) Error accounts are set up to accommodate trades that cannot be reconciled immediately. A compliance officer investigates the trade, records them on the firms books and analyses how it affects the firms market risk and profit and loss. Reports of error accounts are normally sent to senior officers of the firm. Barings management compounded their initial mistake of not segregating Leesons duties by ignoring warnings that prolonging the status quo would be dangerous. An internal

auditors report in August 1994 concluded that his dual responsibility for both the front and back offices was an excessive concentration of powers. The report warned that there was a significant general risk that the general manager (Mr Nick Leeson) could override the controls. The audit team recommended that Leeson be relieved of four duties: supervision of the back-office team, cheque-signing, signingoff SIMEX reconciliations and bank reconciliations. Leeson never gave up any of these duties even though Simon Jones, regional operations manager South Asia and chief operating officer of Barings Securities Singapore, had told the internal audit team that Leeson will with immediate effect cease to perform the[se] functions. b. Senior management involvement The crux of the Barings collapse lay in senior managements lackadaisical attitude to its derivative operations in Singapore. Every major report on managing derivative risks has stressed the need for senior management to understand the risks of the business; to help articulate the firms risk appetite and draft strategies and control procedures needed to achieve these objectives. Senior managers at Barings can be found wanting in all these areas. For example, while they were happy to enjoy the fruits of the success of the Singapore branch, they were not so keen on providing adequate resources to ensure a sound risk management system for a unit that alone ostensibly accounted for one-fifth of its 1993 profits and almost half of its 1994profits. The senior managements response to the internal auditors report for a suitably experienced person to run Singapores back office was that there was not enough work for a full-time treasury and risk manager even if the role incorporated some compliance duties. No senior managers in London checked on whether key internal audit recommendations on the Singapore backoffice had been followed up. Barings senior management had a very superficial knowledge of derivatives and did not want to probe too deeply into an area that was bringing in the profits. Arbitraging the price differences between two futures contracts is a low-risk strategy. How could it then generate such high profits if the central axiom of modern finance theory is low risk-low return, high risk-high return? And if such a low-risk and relatively simple arbitrage could yield so much profits, why were Barings better-capitalised rivals (all with much larger proprietary trading teams) not pursuing the same strategy? The profitability of the business was marvelled at by all senior managers, but never analysed or properly assessed at Management Committee meetings. Senior managers did not even know the breakdown of Leesons reported profits. They erroneously assumed that most of the switching profit came from Nikkei 225 arbitrage, which actually only generated profits of US$7.36 million for 1994, compared with US$37.5 million for JGB arbitrage. No wonder Peter Baring, ex-chairman of Barings, told the bobs that he found the earnings pleasantly surprising since he did not even know the breakdown. Andrew Tuckey, ex-deputy chairman, when asked whether there had ever been any discussion about the long term sustainability of the business, told the same investigation, Yes...in very general terms. We seemed to be making money out of this business and if we can do it, cant somebody else do it? How can we protect our

position?.... Senior management naively accepted that this business was a goldmine with little risk. Of Ron Baker (head of the Financial Products Group) and Mary Walz (Global head of Equity Financial Products), two of Barings most senior derivatives staff and Leesons bosses, the BoBS report concluded, Neither were familiar with the operations of the SIMEX floor. Both claim that they thought that the significant and large profits were possible from a competitive advantage that BFS had arising out of its good inter-office communications and its large client order flow. As the exchanges were open and competitive markets, this suggests a lack of understanding of the nature of the business and the risks (including compliance risks) inherent in combining agency and proprietary trading. Given the huge amounts of cash that Barings had to borrow to meet the margin demands of SIMEX, senior managers were almost negligent in their duties when they did not press Leeson for more details of his positions or/and the Credit department for client details. Members of the Asset and Liability Committee (ALCO), which monitored the banks market risk, expressed concern at the size of the position, but took comfort in the thought that the firms exposure to directional moves in the Nikkei was negligible since they were arbitrage (and hedged) positions. This same misplaced belief led management to ignore market concerns about Barings large positions, even when queries came from high level and reputable sources including a query on January 27 1995 from the Bank for International Settlements in Basle. The bank was haemorrhaging cash and still London took no steps to investigate Singapores requests for funds - partly because senior management assumed that a proportion of these funds represented advances to clients. Even then the complacency is still baffling. BFS had only one third-party client of its own - Banque Nationale de Paris in Tokyo. The rest were clients of the London and Tokyo offices. Either London or Tokyos existing customers had suddenly become very active or Leeson had recently gone out and won some very lucrative accounts or Tokyo or London had a new supersalesman who had brought new business with him. Yet no enquiries were made on this front, which displays a blas attitude about a potentially important source of revenue. c. Adequate capital There are two aspects to this issue - an institution must have sufficient capital to withstand the impact of adverse market moves on its outstanding positions as well as enough money to keep these positions going. Barings management thought that Leesons positions were market neutral and were thus quite happy to fund margin requirements till the contracts expired. In the end, these collateral calls from SIMEX and OSE proved too much to bear (as was pointed out earlier, they were larger than Barings capital base) and the 200-year old institution was forced to call in the receivers. It was funding risk that seriously wounded Barings but the terminal shot came from the discovery that the enormous positions were unhedged. Funding risk also nearly sank Metallgesellschaft, a German industrial company, in 1993. In that year alone, Metallgesellschafts US subsidiary paid out $900 million in margins for its crude oil hedges on NYMEX. When the American subsidiary asked for a cash infusion to meet further


margin obligations, the parent refused and closed out the NYMEX contracts at a loss. Metallgesellschaft only survived because a consortium of German banks quickly put together a rescue package of $2 billion. Both the Barings and Metallgesellschaft stories highlight the need for institutions to pay more attention to the interim funding needs of hedged and semi-hedged positions. But the parallel ends here. Barings' senior managers continued to fund Leeson's activities because they thought they were paying margins on hedged positions (as well as those of their clients), whereas they were actually losing money on outright bets on the Tokyo stock market. Metallgesellschaft, on the other hand, refused to grant any more interim finance because they thought they were losing money on contracts which were in fact bona fide hedges for the company's long-term obligations. Both incidents illustrate the need for senior managers to be more knowledgeable about hedged positions, because the issues facing them are complex in many cases. As it turned out, Barings had significant market risk from its naked positions, so even if it had managed to borrow enough money to cover its margin costs till the contracts expired, it would have been unable to withstand the substantial losses it would suffer on expiry. Agents appointed by Barings' administrators closed out the contracts at losses totalling US$1.4 billion, so Barings' inability to meet its margin obligations at the end of February just hastened its demise. Its fate had been sealed at the end of January, when Leeson had an unauthorised Nikkei exposure of about 30,000 contracts.

d. Poor control procedures
In many trading houses, not only is there a separation of operational duties between the front and back office (absent in Barings), but there is also a unit independent of both to provide an additional layer of checks and balances. The control failures at Barings fall under four headings:
i. Funding
ii. Credit risk
iii. Market risk
iv. No limits

i. Funding
Barings' control procedures were sloppy. Nowhere is this point better illustrated than in the way it funded BFS (or, more accurately, Leeson's unauthorised positions). Barings did not require Leeson to distinguish between variation margin needed to cover proprietary and customer trades; neither did it have a system to reconcile the funds Leeson requested to his reported positions and/or those of its client positions. (The London office, for example, could have used the Standard Portfolio Analysis of Risk (SPAN) margining programme to calculate margins and would then have realised that the amount of money Leeson was requesting was significantly more than that called for under SIMEX's margining rules.) London simply, automatically, remitted to Leeson the sum of money he asked for, despite misgivings felt by many senior operational staff about the accuracy of his data. The fact that no one even asked Leeson to justify his requests is all the more astounding given the size of his demands. At the end of December 1994, the cumulative funding of BFS by Barings London and Tokyo stood at US$354 million. In the first two months of 1995, this figure increased by US$835 million to US$1.2 billion. The BoBS inquiry team notes, "We described...how [Tony] Railton [futures and options settlements senior clerk] discovered in February 1995 that the breakdown of the total US Dollar request was meaningless, and that the BFS clerk knew the total funding requirement for that day and made up the individual figures in the breakdown to add up to the required total." From November 1994, BFS usually requested a round sum split equally between US dollars for client accounts and proprietary positions. The BoBS team notes that Tony Hawes [group treasurer] confirmed that he identified this feature of the requests: "That was one of the main reasons why during February 1995 I paid two visits to Singapore." If the US dollar requests had been in relation to genuine positions taken by clients and house [Barings itself], on any one day it would be unlikely for the margin requests for these two sets of positions to be identical; as for having the requests split 50:50 most days, that is beyond all possibility. Tony Hawes appears to agree with this view. He told the inquiry: "It was just one of the factors that made me distrust this information... It was quite too much of a coincidence. ...Throughout I put it down to poor book-keeping and sloppy treasury management in Barings Futures [BFS]." David Hughes [Treasury Department manager] also told the inquiry that the 50:50 split "was a cause for concern...we said, this cannot be right". He explained: "I do not think we could have house positions and client positions running totally in tandem." [Brenda] Granger [manager, futures and options settlements] confirmed that she would have spoken to Hughes about the split. She added: "We would joke about Singapore, 'Why don't we send somebody's mother [anyone] out there to run the department since Nick is so busy now?'" Staff in London could not reconcile funds remitted to Singapore to both proprietary in-house and individual client positions. But no remedial action was taken. Their cavalier attitude to reconciliation is illustrated by Figure 10.4, which shows total funds remitted to Singapore ostensibly to pay customer margins.

Figure 10.4: Top-up Funding from BSGT to BSL and Margin Balances from BFS from 1 January 1995. Reproduced by permission from the Report of the Board of Banking Supervision Inquiry into the Circumstances of the Collapse of Barings.

The solid line in Figure 10.4 shows the total funds sent to BFS by Barings Securities London (BSL) - the entity to which all customer trades of London were booked; the broken line shows the amount of money funded by Barings Securities Group Treasury (BSGT) in London; this funding was known in the firm as the top-up balance. The Group Treasury advanced this money, on behalf of clients, because it was not always possible for clients to transfer money to Barings in time to meet SIMEX intra-day margin calls. (The bank was expected to recover these advances from clients as quickly as possible.) Figure 10.4 shows that BSGT had to consistently advance a substantial portion of the funds earmarked for margins for client positions. The graph shows that from 1 January to 24 February 1995, genuine client moneys transferred to BFS fell as a proportion of the total funding. Indeed, on 21 February 1995, BSGT had to advance all the client margins of some US$440 million. On 24 February, only US$50 million of the US$540 million sent to Singapore to cover client positions had been recovered from individual clients (i.e. the difference between the solid and broken lines). Barings' control functions did not reconcile the top-up payments to individual client balances - if they had, they would have discovered that the firm was sending


out far too much money just to cover the margin calls of clients.
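Two of the checks London failed to run - reconciling funding requests against the margin implied by reported positions, and questioning the implausible 50:50 house/client split - are easy to express in a few lines. The sketch below is illustrative only: the contract counts, per-contract margin rate and dollar amounts are hypothetical, and real SPAN margining is far more involved than the flat per-contract figure used here.

# Illustrative sketch, assuming hypothetical positions, margin rates and requests.

def margin_from_positions(reported_contracts, margin_per_contract):
    """Exchange margin implied by the positions actually reported to London."""
    return reported_contracts * margin_per_contract

def check_funding_request(requested_total, house_part, client_part,
                          implied_margin, tolerance=0.10):
    flags = []
    if requested_total > implied_margin * (1 + tolerance):
        flags.append("request exceeds margin implied by reported positions")
    if abs(house_part - client_part) < 0.01 * requested_total:
        flags.append("house and client margin nearly identical - implausible split")
    return flags

implied = margin_from_positions(reported_contracts=15_000, margin_per_contract=20_000)
print(check_funding_request(requested_total=450_000_000,
                            house_part=225_000_000,
                            client_part=225_000_000,
                            implied_margin=implied))
# ['request exceeds margin implied by reported positions',
#  'house and client margin nearly identical - implausible split']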

ii. Credit risk
The credit risk implication of the client advances represented by the top-up balances was significant if the total funds remitted to Singapore really were meeting genuine client margin calls. Yet the credit risk department did not question why Barings was lending over US$500 million to its clients to trade on SIMEX, and collecting only 10% in return. It did not seem to have an idea of who these clients were, yet Barings' financial losses would have been significant if some of these clients had defaulted. The Credit Committee under George Maclean insists that it was Barings' policy to finance client margins until they could be collected. But no limit per client or on the total top-up funds was set. Indeed, clients who were advanced money this way appear not to have undergone any credit approval process. The Credit Committee never formally considered the credit aspects of the top-up balance, although it could see the growth of these advances as recorded on the balance sheets. Plainly put, the credit risk controls of Barings Securities were shambolic.

iii. Market risk
Because Leeson controlled the back office and because Barings had no independent unit checking the accuracy of his reports, the market risk reports generated by Barings' risk management unit and passed on to ALCO were inaccurate. Leeson's futures positions showed no market risk because trades were supposedly offset by opposite transactions on another exchange. Peter Baring and Barings' shareholders have learnt too painfully the meaning of garbage-in, garbage-out: a system is only as good as the data it receives.

iv. No limits
Barings did not impose any gross position limits on Leeson's proprietary trading activities because it felt that there was little market risk attached to arbitrage trades, since at the close of business the position must be flat. But the Barings collapse has shown that placing gross position limits on each side of an arbitrage book is perhaps not such a bad idea after all. While it is true that an arbitrage book has little price (directional) risk, it has basis and settlement risk. The former arises because prices in two markets do not always move in tandem, and the latter because different markets have different settlement systems, creating liquidity and funding risk.

e. Lack of supervision
Theoretically Leeson had lots of supervisors; in reality none exercised any real control over him. Barings operated a matrix management system, where managers based overseas report to local administrators and to a product head (usually based at head office or the regional headquarters). Leeson's Singapore supervisors were James Bax, regional manager South Asia and a director of BFS, and Simon Jones, regional operations manager South Asia, also a director of BFS and chief operating officer of Barings Securities Singapore. Jones and the heads of the support functions in Singapore also had reporting lines to the Group-wide support functions in London. Yet both Bax and Jones told the BoBS inquiry that they did not feel operationally responsible for Leeson. Bax felt Leeson reported directly to Baker or Walz on trading matters and to Settlements/Treasury in London for back-office matters. Jones felt his role in BFS was limited only to administrative matters, and he concentrated on the securities side of Barings' activities in South Asia. Leeson's reporting lines for product profitability are not clear cut, since his supervisors have disputed who was directly responsible for him from January 1, 1994. His ultimate boss was Ron Baker, head of the financial products group. But who had day-to-day control over him? Mary Walz, global head of equity financial products, insists that she thought Fernando Gueler, head of equity derivatives proprietary trading in Tokyo, was in charge of Leeson's intra-day activities, since the latter's switching activities were booked in Tokyo. However, Gueler insists that in October 1994 Baker told him that Leeson would report to London and not Tokyo. He thus assumed that Walz would be in charge of Leeson. Walz herself still disputes this claim. Tapes of telephone conversations show that Leeson spoke frequently to both Gueler and Walz. (The bottom line, however, is that Gueler reported to Walz.) Two important incidents vividly illustrate the cavalier attitude Barings had towards supervising Leeson. The first involves two letters to BFS from SIMEX. In a letter dated 11 January 1995, SIMEX senior vice-president for audit and compliance Yu Chuan Soo complained about a margin shortfall of about US$116 million in account 88888 and that Barings appeared to have broken SIMEX rule 822 by previously financing the margin requirements of this account (which appeared in SIMEX's system as a customer account). SIMEX also noted that the initial margin requirement of this account was in excess of US$342 million. BFS was asked to provide a written explanation of the margin difference on account 88888 and of its inability to account for the problem in the absence of Leeson. No warning lights went off in Singapore. No one investigated who this customer really was and why he was having difficulties in meeting margin payments, or why he had such a huge position, or the credit risk Barings faced if this customer defaulted on the margins that Barings had paid on its behalf. A copy of the letter was not sent to operational heads in London. Simon Jones did not press Leeson for an explanation; indeed, he dealt with the matter by allowing Leeson to draft Barings' response to SIMEX. The second incident did come to the attention of London but again was dealt with unsatisfactorily, perhaps because Barings personnel themselves are unsure about what really happened. At the beginning of February 1995, Coopers & Lybrand brought to the attention of London and Simon Jones the fact that US$83 million apparently due from Spear, Leeds & Kellogg, a US investment group, had not been received. No one is sure how this multimillion-dollar receivable came about. One version of events is that BFS, through Leeson, had traded or broked an over-the-counter deal between Spear, Leeds & Kellogg and BNP, Tokyo. The transaction involved 200 50,000 call options, resulting in a premium of ¥7.778 billion (US$83 million). The second version was that an operational error had occurred, i.e. a payment had been made to a wrong third party in December 1994. Both versions had very serious control implications for Barings. If Leeson had sold or broked an OTC option, then he had engaged in an unauthorised activity. Yet he was not admonished for doing so; nor is there any record of Barings' management taking any steps to ensure that it did not happen again. If the SLK receivable was an operational error, Barings had to tighten up its back-office procedures.



Conclusion
The Nikkei 225 and JGB futures contracts traded by Leeson were the simplest of derivative instruments. They were also the most transparent: since they were listed contracts, Leeson was required to pay (or receive) daily margins and so needed funds from London. In January and February 1995 alone, he asked for US$835 million. He could not hide his build-up of positions on the OSE because the exchange publishes weekly numbers. All his rivals could see his enormous positions, and many assumed that the positions were hedged, because such naked positions were out of all proportion to the firm's capital base or even to those of other players. His senior managers also assumed Leeson's positions were hedged. But unlike outsiders, who had to assume that these positions were hedged, Barings' management did not have to assume. They could have done something about it - they could have probed Leeson, they could have tried to obtain more information from their internal information systems, and most of all they could have heeded the warning signals available in late 1994 and throughout January and February of 1995. But although Barings' fate was only sealed in the final weeks of February, the seeds of its destruction were sown when senior management entered new businesses without ensuring adequate support and control systems. The collapse of Britain's oldest merchant bank was an extreme example of operations risk, i.e. the risk that deficiencies in information systems or internal controls result in unexpected loss. Will it happen again? Certainly, if senior managers of firms continue to disregard rules and recommendations which have been drawn up to ensure prudent risk-taking.


