
REAL OPTIONS VALUATION, INC.

This manual, and the software described in it, are furnished under license and may only be used or copied in accordance
with the terms of the end-user license agreement. Information in this document is provided for informational purposes only,
is subject to change without notice, and does not represent a commitment as to merchantability or fitness for a particular
purpose by Real Options Valuation, Inc. No part of this manual may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying and recording, for any purpose without the express written
permission of Real Options Valuation, Inc. Materials based on copyrighted publications by Dr. Johnathan Mun, Ph.D.,
MBA, MS, BS, CRM, CFC, FRM, MIFC, Founder and CEO, Real Options Valuation, Inc., and creator of the software.
Written, designed, and published in the United States of America. Microsoft® is a registered trademark of Microsoft
Corporation in the U.S. and other countries. Other product names mentioned herein may be trademarks and/or registered
trademarks of the respective holders.

Analytics, algorithms, and development of CMOL were by Real Options Valuation, Inc. Financial models as well as Basel
expertise were provided by Risk Business Latin America. CMOL is protected by multiple global patents and patents pending as
well as software copyrights.

© Copyright 2005–2015 by Dr. Johnathan Mun. All rights reserved.


Real Options Valuation, Inc.
4101F Dublin Blvd., Ste. 425
Dublin, California 94568 U.S.A.
Phone +1.925.271.4438 • Fax +1.925.369.0450
admin@realoptionsvaluation.com
www.risksimulator.com
www.realoptionsvaluation.com
Table of Contents
1. INTRODUCTION TO CMOL SOFTWARE ...................................................... 1 
1.1 CMOL Modules .......................................................................................................................................... 3 
1.2 Installation Requirements and Procedures ............................................................................................ 3 
1.3 Licensing ........................................................................................................................................................ 4 
Procedures: Licensing ................................................................................................................................. 4 

2. CREDIT RISK .........................................................................................................5 


2.1 CMOL’s Credit Risk Module ................................................................................................................... 5 
2.2 Credit Economic and Regulatory Capital .............................................................................................. 6 
Procedures: Credit Risk Module .............................................................................................................. 8 
2.3 Basel Credit Risk Models and Economic Capital ................................................................................ 9 
Retail Loans: Residential Mortgage Exposures .................................................................................... 9 
Retail Loans: Qualifying Revolving Retail Exposures ........................................................................ 9 
Retail Loans: Other Retail Exposures .................................................................................................... 9 
Corporate Loans: Corporate, Sovereign, and Bank Exposures ......................................... 10

3. MARKET RISK ..................................................................................................... 11 


3.1 CMOL Market Risk Module .................................................................................................................. 11 
Procedures: Market Risk .......................................................................................................................... 13 
3.2 Central Bank Market Risk ....................................................................................................................... 14 

4. LIQUIDITY RISK (ALM) .................................................................................... 16 


4.1 ALM: Net Interest Margin and Economic Value of Equity ........................................................... 16 
Procedures: ALM Interest Rate Liquidity Risk .................................................................................. 22 
4.2 ALM: Liquidity Risk with Scenario Analysis and Stress Testing.................................................... 24 
Procedures: ALM Liquidity Risk with Scenario and Sensitivity Analysis..................................... 27 

5. ANALYTICAL MODELS ....................................................................................28 


5.1 Credit and Market Risk Analytical Models .......................................................................................... 28 
5.2 Credit Structural Models .......................................................................................................................... 28 
5.3 Credit Time-Series Models ...................................................................................................................... 29 
5.4 Credit Portfolio Models ........................................................................................................................... 29 
5.5 Credit Models ............................................................................................................................................. 30 
Procedures: Analytical Models ............................................................................................................... 34 
6. OPERATIONAL RISK .........................................................................................35 
6.1 BIA, TSA, ASA, RSA ............................................................................................................................... 36 
Procedures: Operational Risk (BIA, TSA, ASA, RSA) .................................................................... 38 
6.2 AMA: Advanced Measurement Approach ......................................................................................... 39 
Procedures: Operational Risk (Advanced Measurement Approach)............................................ 41 
6.3 AMA: Basel OPCAR Convolution Model.......................................................................................... 42 
Procedures: Operational Risk (Basel OPCAR Convolution Model) ............................................ 45 

APPENDIX 1: CONVOLUTION & SIMULATION ...........................................46 


A1.1 Convolution of Two Uniforms .......................................................................................................... 47 
A1.2 Convolution of Twelve Uniforms ..................................................................................................... 48 
A1.3 Convolution of Multiple Exponentials ............................................................................................. 49 

APPENDIX 2: CONVOLUTION OF MULTIPLICATION OF FREQUENCY
AND SEVERITY DISTRIBUTIONS IN OPERATIONAL RISK CAPITAL
MODEL IN BASEL III ............................................................................ 51
Introduction ................................................................................................................................................ 51 
Problem with Basel OPCAR .................................................................................................................. 52 
Theory .......................................................................................................................................................... 53 
Empirical Results: Convolution versus Monte Carlo Risk Simulation for OPCAR ................. 56 
High Lambda and Low Lambda Limitations ..................................................................................... 59 
Caveats, Conclusions, and Recommendations ................................................................................... 60 

APPENDIX 3: PROBABILITY DISTRIBUTIONS .............................................62 


Discrete Distributions ..................................................................................................................................... 65 
Bernoulli or Yes/No Distribution......................................................................................................... 65 
Binomial Distribution ............................................................................................................................... 65 
Discrete Uniform....................................................................................................................................... 66 
Geometric Distribution............................................................................................................................ 66 
Hypergeometric Distribution ................................................................................................................. 67 
Negative Binomial Distribution ............................................................................................................. 68 
Pascal Distribution .................................................................................................................................... 69 
Poisson Distribution ................................................................................................................................. 70 
Continuous Distributions............................................................................................................................... 71 
Arcsine Distribution.................................................................................................................................. 71 
Beta Distribution ....................................................................................................................................... 71 
Beta 3 and Beta 4 Distributions ............................................................................................................. 72 
Cauchy Distribution, or Lorentzian or Breit-Wigner Distribution................................................ 72 
Chi-Square Distribution ........................................................................................................................... 73 
Cosine Distribution ................................................................................................................................... 73 
Double Log Distribution ......................................................................................................................... 74 
Erlang Distribution ................................................................................................................................... 75 
Exponential Distribution ......................................................................................................................... 75 
Exponential 2 Distribution ..................................................................................................................... 76 
Extreme Value Distribution, or Gumbel Distribution..................................................................... 76 
F Distribution, or Fisher-Snedecor Distribution ............................................................................... 77 
Gamma Distribution (Erlang Distribution) ........................................................................................ 77 
Laplace Distribution ................................................................................................................................. 78 
Logistic Distribution ................................................................................................................................. 79 
Lognormal Distribution ........................................................................................................................... 80 
Lognormal 3 Distribution ....................................................................................................................... 81 
Normal Distribution ................................................................................................................................. 81 
Parabolic Distribution .............................................................................................................................. 81 
Pareto Distribution.................................................................................................................................... 82 
Pearson V Distribution ............................................................................................................................ 83 
Pearson VI Distribution........................................................................................................................... 83 
PERT Distribution.................................................................................................................................... 84 
Power Distribution.................................................................................................................................... 85 
Power 3 Distribution ................................................................................................................................ 85 
Student’s t Distribution ............................................................................................................................ 86 
Triangular Distribution............................................................................................................................. 86 
Uniform Distribution ............................................................................................................................... 87 
Weibull Distribution (Rayleigh Distribution)...................................................................................... 88 
Weibull 3 Distribution .............................................................................................................................. 88 

1. INTRODUCTION TO
CMOL SOFTWARE

The Credit, Market, Operational, and Liquidity Risk (CMOL) software was developed by
Real Options Valuation, Inc. in collaboration with Risk Business Latin America. ROV
CMOL comes in 5 languages (English, Chinese, Portuguese, Russian, and Spanish) and
has several main analytical areas briefly described below. A wealth of resources is available to get
you started including Online Getting Started Videos, Getting Started Guides, Case Studies, White
Papers, and Sample Models.
The CMOL software was developed to perform a comprehensive analysis for banks based on
Basel II and Basel III requirements on credit, market, operational, and liquidity risks. CMOL
takes all of our advanced risk and decision analytical methodologies and incorporates them into
a simple-to-use and step-by-step integrated software application suite that is used by small to
midsize banks. It simplifies the risk-based Basel requirements and empowers a bank’s
stakeholders and decision makers with insights from powerful analytics using simple to interpret
results and reports.
Note that although we attempt to be thorough in this user manual, the manual is absolutely not
a substitute for the Training DVD, live training courses, and books written by the software’s
creator (e.g., Dr. Johnathan Mun’s Real Options Analysis, 2nd Edition, Wiley Finance, 2005;
Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Forecasting, and Optimization, 2nd
Edition, Wiley Finance, 2010). Please visit our website at www.realoptionsvaluation.com for
more information about these items.
It is often said that the Basel Committee Standards, formally called Capital Accords, constitute
the bible for banking regulators (Central Banks) everywhere. In addition to the Accords, the Basel
Committee has also framed 29 principles for effective banking supervision known as the Core
Principles for Effective Banking Supervision. The standards encompassed by the Capital Accord
and the Core Principles have become the source of banking regulation in every country in the
world. As is widely known, these standards have evolved from Basel I to Basel II and III,
reflecting the evolution of the financial industry (from Basel I to II) and the lessons from the
financial crisis of 2008 (from Basel II to III). The most noticeable financial regulation paradigm
changes captured and fostered by the Basel standards’ evolution are risk management and capital
allocation. These most important changes in the international standards, and, therefore, in
virtually every country's financial regulatory framework, relate to the manner in which risks are
managed and capital is calculated. By the general definition, as stated in Core Principle 15, Risk
Management is the process to be used by banks to “identify, measure, evaluate, monitor, report
and control or mitigate all material risks on a timely basis and to assess the adequacy of their
capital and liquidity in relation to their risk profile.” This process has been presented as the
IMMM process: Identify, Measure, Monitor, and Mitigate each risk. In practice, the way to
manage risks, and, hence, comply with the new Basel regulations, is to introduce or enhance the
IMMM process for each material risk the financial institution faces. Along with the
aforementioned international standards, there are tools that facilitate the implementation or
enhancement of the IMMM processes. Briefly, these are Formal Policies, Key Risk Indicators,
Capital Models, and MIS/Reports.
This user manual looks at the practical tools in CMOL—quantitative models, Monte Carlo risk
simulations, credit models, and business statistics—utilized to model and quantify regulatory and
economic capital, measure and monitor key risk indicators, and report all the obtained data in a
clear and intuitive manner. It relates to the modeling and analysis of asset liability management,
credit risk, market risk, operational risk, and liquidity risk for banks or financial institutions,
allowing these firms to properly identify, assess, quantify, value, diversify, hedge, and generate
periodic regulatory reports for supervisory authorities and Central Banks on their credit, market,
and operational risk areas, as well as for internal risk audits, risk controls, and risk management
purposes.
In banking finance and financial services firms, economic capital is defined as the amount of risk
capital, assessed on a realistic basis based on actual historical data, the bank or firm requires to
cover the risks as a going concern, such as market risk, credit risk, liquidity risk, and operational
risk. It is the amount of money that is needed to ensure survival in a worst-case scenario. Financial
services regulators such as Central Banks, the Bank for International Settlements, and other regulatory
commissions should then require banks to hold an amount of risk capital at least equal to their
economic capital times some holding multiple. Typically, economic capital is calculated by
determining the amount of capital that the firm needs to ensure that its realistic balance sheet
stays solvent over a certain time period with a prespecified probability (e.g., usually defined as
99.00%). Therefore, economic capital is often calculated with Value at Risk (VaR) type models.
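As a minimal illustration of this definition (not part of the CMOL software), the sketch below simulates a one-year loss distribution for a hypothetical loan portfolio and takes economic capital as the 99% VaR less the expected loss; the portfolio size, default probability, and loss given default are invented for the example.

```python
import numpy as np

# Simulate a one-year loss distribution for a hypothetical portfolio of 500
# equal-sized loans and derive economic capital as 99% VaR minus expected loss.
rng = np.random.default_rng(seed=42)

n_trials = 100_000
portfolio_exposure = 10_000_000      # hypothetical total exposure
pd_annual = 0.02                     # hypothetical probability of default
lgd = 0.45                           # hypothetical loss given default
n_loans = 500

defaults = rng.binomial(n_loans, pd_annual, size=n_trials)
losses = defaults / n_loans * portfolio_exposure * lgd

expected_loss = losses.mean()
var_99 = np.quantile(losses, 0.99)
economic_capital = var_99 - expected_loss

print(f"Expected Loss:    {expected_loss:,.0f}")
print(f"99% VaR:          {var_99:,.0f}")
print(f"Economic Capital: {economic_capital:,.0f}")
```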
In light of these International Standards, which are now formal regulations in virtually every
country in the world, we utilize a spectrum of basic and more complex approaches to generate
an economic capital model calculated on the formally defined risk drivers in each case and
providing for risk-sensitive capital results for each relevant risk. Additionally, for each risk,
through a set of basic information, a set of key risk indicators is generated and combined with
the capital model results to produce relevant risk reports. Since regulations still require many
instances of regulatory capital, such calculation is still provided along with Basel Standards as
another useful output of the designed tools. Finally, the Basel Committee differentiates credit,
market, and operational risks from the rest, defining these three as the most relevant in any given
financial institution. According to the Three Pillar design of Basel II, these are known as Pillar I
risks. Under Basel II and III, economic and regulatory capital can be unified for Pillar I risks. In
other words, for these three risks (credit, market and operational), economic capital models are
given by the Basel Accord as a way to generate some standardization of methodologies and
comparison among banks and countries.
For credit risk, the traditional approach for Basel I regulatory capital (still available as a basic
choice in Basel III) is to calculate 8% of outstanding loan volume, multiplied by a factor
depending on the type of asset treated (100% for uncollateralized loans, 50% for mortgages, 20%
for interbank, etc.). This approach, however, does not differentiate by risk within each category.
In order to create a more risk-sensitive approach, Basel II incorporated the main logic of portfolio
models, where capital is the amount required to cover unexpected losses. Unexpected losses, in
turn, are calculated as the residual given by the difference between the mean and the confidence
interval of a loss distribution function.
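For reference, the Basel I calculation described above can be sketched in a few lines (illustrative only, not CMOL code); the exposure names and amounts are hypothetical.

```python
# Hypothetical book of exposures with Basel I style risk weights.
exposures = [
    ("Uncollateralized corporate loan", 1_000_000, 1.00),
    ("Residential mortgage",              800_000, 0.50),
    ("Interbank placement",               500_000, 0.20),
]

# Regulatory capital is 8% of the risk-weighted outstanding volume.
regulatory_capital = sum(amount * weight * 0.08 for _, amount, weight in exposures)
print(f"Basel I style regulatory capital: {regulatory_capital:,.0f}")
```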


1.1 CMOL Modules


The CMOL software has the following modules:
 Credit Risk: Applies Basel II/III requirements on credit modeling (residential
mortgages, revolving credit, wholesale corporate and sovereign debt, and miscellaneous
credit), computes Regulatory Capital (RC), Risk-Weighted Assets (RWA), and
Economic Capital (EC), given inputs such as historical default data to compute
Probability of Default (PD), Loss Given Default (LGD), and Exposure at Default
(EAD).
 Market Risk: Computes gross Value at Risk (VaR) and internal simulated VaR with
various holding days and VaR percentiles.
 Operational Risk: All of Basel’s Operational Risk methods such as the Basic Indicator
Approach (BIA), The Standardized Approach (TSA), Alternate Standardized Approach
(ASA), Revised Standardized Approach (RSA), and Advanced Measurement Approach
(AMA) are supported in the software. Monte Carlo Risk Simulation methods are used
in concert with convolution of probability distributions of operational risk Severity and
Frequency to determine Expected Losses (EL), Unexpected Losses (UL), and
estimation of Basel’s OPCAR or Operational Capital at Risk values for the AMA
approach.
 Liquidity Risk: Asset Liability Management modeling approaches to compute Liquidity
Gap, Economic Value of Equity (EVE), and Net Income Margin (NIM) based on
interest rate risk and liquidity risk, with stress testing and scenario analysis.
 Analytical Models: Provides structural, time-series, portfolio, and credit models on
estimating PD, EAD, LGD, credit exposures, options-based asset valuation, volatility,
debt instrument valuation, Credit Conversion Factors (CCF), Loan Equivalence Factors
(LEQ), and a myriad of other models.
 Monte Carlo Risk Simulation: Access to 50 probability distributions including Extreme
Value Distributions (EVT) for estimating and simulating Severity of Operational Losses
(e.g., Fréchet, Generalized Pareto, Gumbel, Logistic, Log-Logistic, Lognormal, and
Weibull) and Frequency of Operational Risk Events (e.g., Poisson).

1.2 Installation Requirements and Procedures


To install the software, follow the on-screen instructions. The minimum requirements for this
software are:
 Pentium dual-core i3 processor or later (i5 dual-core recommended)
 Windows 7, Windows 8, or later
 Microsoft Excel 2010, 2013, or later
 500 MB free space
 2GB RAM minimum (4GB recommended)
 Administrative rights to install software


There is a default 10-day trial license file that comes with the software. To obtain a full corporate
license, please contact Real Options Valuation, Inc., at admin@realoptionsvaluation.com or call
+1 (925) 271-4438, or visit our website at www.realoptionsvaluation.com. On the website, click
on Download to obtain the latest software release, or click on the FAQ link to obtain any updated
information on licensing or installation issues and fixes.
Please note that the software works on a Mac computer as long as Parallels, Boot Camp, or a
virtual machine running Microsoft Windows is installed.

1.3 Licensing
If you have installed the software and have purchased a full license to use the software, you will
need to e-mail us your Hardware Fingerprint so that we can generate a license file for you.

Procedures: Licensing
 Start CMOL, click on the Menu icon, select Install License, and copy down your 8-digit alphanumeric Hardware Fingerprint, then e-mail it to admin@realoptionsvaluation.com (Figure 1).
 Once we have obtained this ID, a newly generated license will be e-mailed to you.
 Start CMOL again, click on the Menu icon, select Install License, and enter/paste in the
Name and Key exactly as you have received them via e-mail.

Figure 1 CMOL Menu Icon, Install License, and Hardware Fingerprint

2. CREDIT RISK

The Credit Risk module is described in this chapter. The following credit risk methodologies are supported in the CMOL software:
 Applies Basel II/III Credit Risk models for Residential Mortgages, Revolving Credit, Wholesale Corporate and Sovereign Debt, and Miscellaneous Credit.
 Uses historical default data to determine the historical Probability of Default (PD) or
enter your own PD estimates.
 Computes and returns the Regulatory Capital (RC), Risk-Weighted Assets (RWA), and
Economic Capital (EC), given inputs such as historical default data to compute
Probability of Default (PD), Loss Given Default (LGD), and Exposure at Default
(EAD).
 Generates multiple months of Credit Risk analysis and saves, backs up, and encrypts
your sensitive data using 256-bit encryption protocols.

2.1 CMOL’s Credit Risk Module


Figure 2 illustrates the PEAT utility’s ALM-CMOL module for Credit Risk—Economic
Regulatory Capital (ERC) Global Settings tab. This current analysis is performed on credit issues
such as loans, credit lines, and debt at the commercial, retail, or personal levels. To get started
with the utility, existing files can be opened or saved, or a default sample model can be retrieved
from the menu. However, to follow along, we recommend opening the default example (click
on the menu icon on the top right corner of the software, then select Load Example).
The number of categories of loans and credit types can be set as well as the loan or credit category
names, a Loss Given Default (LGD) value in percent, and the Basel credit type (residential
mortgages, revolving credit, other miscellaneous credit, or wholesale corporate and sovereign
debt). Each credit type has its required Basel III model that is public knowledge, and the software
uses the prescribed models per Basel regulations. Further, historical data can be manually entered
by the user into the utility or via existing databases and data files. Such data files may be large and,
hence, stored either in a single file or multiple data files where each file’s contents can be mapped
to the list of required variables (e.g., credit issue date, customer information, product type or
segment, Central Bank ratings, amount of the debt or loan, interest payment, principal payment,
last payment date, and other ancillary information the bank or financial services firm has access
to) for the analysis, and the successfully mapped connections are displayed. Additional
information such as the required VaR percentiles, average life of a commercial loan, and historical
data period on which to run the data files to obtain the Probability of Default (PD) is entered.


Next, the Exposure at Default (EAD) analysis periodicity is selected as is the date type and the
Central Bank ratings. Different Central Banks in different nations tend to have similar credit
ratings but the software allows for flexibility in choosing the relevant rating scheme (i.e., Level 1
may indicate on-time payment of an existing loan whereas Level 3 may indicate a late payment
of over 90 days, which, therefore, constitutes a default). All these inputs and settings can be saved
either as stand-alone settings and data or including the results. Users would enter a unique name
and notes and save the current settings (previously saved models and settings can be retrieved,
edited, or deleted; a new model can be created; or an existing model can be duplicated). The saved
models are listed and can be rearranged according to the user’s preference.

FIGURE 2 Credit Risk Settings.

2.2 Credit Economic and Regulatory Capital


Figure 3 illustrates the PEAT utility’s ALM-CMOL module for Credit Risk—Economic
Regulatory Capital’s Results tab. The results are shown in the grid if data files were loaded and
preprocessed and results were computed and presented here (the loading of data files is discussed
in connection with Figure 2). However, if data are to be manually entered (as previously presented
in Figure 2), then the grey areas in the data grid are available for manual user input, such as the
number of clients for a specific credit or debt category, the number of defaults for said categories
historically by period, and the exposure at default values (total amount of debt issued within the
total period). One can manually input the number of clients and number of credit and loan
defaults within specific annual time-period bands. The utility computes the percentage of defaults
(number of credit or loan defaults divided by number of clients within the specified time periods),
and the average percentage of default is the proxy used for the PD. If users have specific PD
rates to use, they can simply enter any number of clients and number of defaults as long as the
ratio is what the user wants as the PD input (e.g., a 1% PD means users can enter 100 clients and
1 as the number of defaults). The LGD can be user inputted in the global settings as a percentage
(LGD is defined as the percentage of losses of loans and debt that cannot be recovered when
they are in default). The EAD is the total loans amount within these time bands. These PD,
LGD, and EAD values can also be computed using structural models as is discussed later.
Expected Losses (EL) is the product of PD × LGD × EAD. Economic Capital (EC) is based
on Basel II and Basel III requirements and is a matter of public record. Risk-Weighted Assets
(RWA) is a regulatory requirement per Basel II and Basel III, computed as 12.5 × EC. The change in
Capital Adequacy Requirement (CAR @ 8%) is simply the ratio of the EC to EAD less the 8%
holding requirement. In other words, the Regulatory Capital (RC) is 8% of EAD.
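The arithmetic in this section can be sketched as follows (illustrative only, not CMOL output); the client, default, EAD, and LGD figures are hypothetical, and the EC value is a placeholder that would normally come from the Basel formulas in Section 2.3.

```python
# Hypothetical historical default data for one credit category (2010-2013).
clients_per_year = [100, 100, 100, 100]
defaults_per_year = [2, 1, 3, 2]
ead = 5_000_000        # Exposure at Default: total loans issued in the period
lgd = 0.40             # Loss Given Default entered in the Global Settings tab

# PD is the average of the yearly default percentages (defaults / clients).
pd_ = sum(d / c for d, c in zip(defaults_per_year, clients_per_year)) / len(clients_per_year)

expected_loss = pd_ * lgd * ead            # EL = PD x LGD x EAD
economic_capital = 0.045 * ead             # placeholder EC; see the Basel models in Section 2.3
rwa = 12.5 * economic_capital              # RWA = 12.5 x EC
regulatory_capital = 0.08 * ead            # RC = 8% of EAD
delta_car = economic_capital / ead - 0.08  # change in CAR relative to the 8% holding requirement

print(f"PD {pd_:.2%}, EL {expected_loss:,.0f}, EC {economic_capital:,.0f}, "
      f"RWA {rwa:,.0f}, RC {regulatory_capital:,.0f}, Delta CAR {delta_car:.2%}")
```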
The results obtained by the model allow for the construction of key risk indicators, comparing
basic regulatory capital requirements with these economic capital requirements. Additionally,
when coupled with the internal or external rating models (or credit scores) a profile of expected
and unexpected losses for each product or asset type can be constructed. This is also the basis
for the application of RAROC indicators, and the effective allocation of economic capital, in line
with the international standards and local regulatory requirements.

FIGURE 3 Economic Regulatory Capital (ERC).


Procedures: Credit Risk Module
 You can follow along with the examples in this user manual or create your own models from scratch. To follow along, in the CMOL software, click on the Menu icon and select
Load Example (see the droplist in Figure 1). Some sample data and models will be
loaded.
 Click on the Credit Risk (ERC) | Global Settings tab (Figure 2). Follow the steps and
make your selections and enter the relevant information. For example:
o Step 1: Increase or decrease the Number of Categories to show, enter the
Category Names, LGD %, and Basel Credit Type.
o Step 2: Select the Manual data input (first radio selection), the most typical
selection. The other selections can be used to set up data files for data
preprocessing.
o Step 3: Enter the Credit Value at Risk % (e.g., 99.90%), Average Maturity for
commercial loans (e.g., 1, 5, or 10), and the Years to analyze for historical
defaults (e.g., 2010 to 2013).
o Step 4: You can save the settings by entering a Name and clicking Save As.
Multiple models and settings can be saved (you can also decide if only the
settings and data or the settings, data, and results should be saved), allowing
you the flexibility to archive years of data in a single file. You can select an
existing saved model/setting and click on Edit to view its settings. Note that
the currently selected saved model will be the model computed in the next tab.
 Note that you can save the file (*.rovcml) using the Menu icon and selecting Save or
Save As. Note that this file can also be encrypted for additional security by selecting
Encrypt Data File from the menu. This *.rovcml file can hold all your data for multiple
years as well as settings from multiple tabs in a single portable file.
 When the settings are done, go to the Credit Risk (ERC) | Results tab (Figure 3). Enter
additional required data in the white cells (e.g., Number of Clients, Number of Defaults,
and Exposure at Default or EAD %). Note that the number of rows shown here
depends on your previous settings (e.g., 2010 to 2013 means there are 4 rows/years of
data) but there is only a single EAD input as this input applies to all years. The data grid
is also divided into multiple sections, corresponding to Step 1 of the Global Settings tab
as previously described.
o TIP: The total default percent is computed using the number of defaults
divided by the number of clients per year, and the Probability of Default (PD)
is the average of these default percentages. Alternatively, if you wish to set the
PD instead, simply enter a simple set of inputs. For example, to obtain a 5%
PD, enter 100 as the number of clients and 5 as number of defaults for all years.
o TIP: The computations for Expected Losses (EL), Economic Capital (EC),
Risk Weighted Assets (RWA), Delta CAR @8%, and Regulatory Capital are
described in the next section.

 TIP: As a reminder, after completing and saving the file, you can optionally generate a
report of the calculations. The report can be generated in the Menu icon by selecting
Create Report. Excel will open and the inputs, settings, and computed results will be
pasted into Excel. While the report is being generated, remember to let the software
work; in other words, hands off the keyboard and mouse for a few seconds. Note that
the report may be empty if there are no computations for a specific tab.


2.3 Basel Credit Risk Models and Economic Capital


The CMOL software applies Basel II and Basel III requirements and definitions on regulatory
capital. For instance, the Economic Capital is defined as Value at Risk (i.e., the Total Risk
Amount) less any Expected Losses. There are 4 categories of equations based on the type of
credit and loans: 3 types of Retail Loans plus a category for Corporate Loans.

Retail Loans: Residential Mortgage Exposures

$$R = 0.15$$

$$K = LGD \times \Phi\left[\frac{\Phi^{-1}(PD) + \sqrt{R}\,\Phi^{-1}(99.9\%)}{\sqrt{1-R}}\right] - PD \times LGD$$

$$EC = K \times EAD, \qquad RWA = 12.5 \times K \times EAD = 12.5 \times EC$$

Retail Loans: Qualifying Revolving Retail Exposures

$$R = 0.04$$

$$K = LGD \times \Phi\left[\frac{\Phi^{-1}(PD) + \sqrt{R}\,\Phi^{-1}(99.9\%)}{\sqrt{1-R}}\right] - PD \times LGD$$

$$EC = K \times EAD, \qquad RWA = 12.5 \times K \times EAD = 12.5 \times EC$$

Retail Loans: Other Retail Exposures

$$R = 0.03\left(\frac{1-e^{-35\,PD}}{1-e^{-35}}\right) + 0.16\left(1-\frac{1-e^{-35\,PD}}{1-e^{-35}}\right)$$

$$K = LGD \times \Phi\left[\frac{\Phi^{-1}(PD) + \sqrt{R}\,\Phi^{-1}(99.9\%)}{\sqrt{1-R}}\right] - PD \times LGD$$

$$EC = K \times EAD, \qquad RWA = 12.5 \times K \times EAD = 12.5 \times EC$$

Corporate Loans: Corporate, Sovereign, and Bank Exposures

$$R = 0.12\left(\frac{1-e^{-50\,PD}}{1-e^{-50}}\right) + 0.24\left(1-\frac{1-e^{-50\,PD}}{1-e^{-50}}\right)$$

$$b = \left(0.11852 - 0.05478\,\ln(PD)\right)^{2}$$

$$K = \left[LGD \times \Phi\left(\frac{\Phi^{-1}(PD) + \sqrt{R}\,\Phi^{-1}(99.9\%)}{\sqrt{1-R}}\right) - PD \times LGD\right] \times \frac{1 + (M-2.5)\,b}{1 - 1.5\,b}$$

$$EC = K \times EAD, \qquad RWA = 12.5 \times K \times EAD = 12.5 \times EC$$

where Φ is the cumulative distribution function of the standard normal N(0,1) distribution and Φ⁻¹ is its inverse, PD is the probability of default, LGD the loss given default, EAD the exposure at default, R the asset correlation, M the effective maturity, and b the maturity adjustment.

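For readers who want to reproduce these figures outside the software, the following sketch implements the publicly documented Basel IRB formulas above in Python; the function names, the example PD, LGD, EAD, and maturity values, and the use of SciPy's normal distribution are assumptions made for illustration rather than CMOL internals.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def correlation(pd, credit_type):
    """Asset correlation R per the Basel II/III IRB formulas above."""
    if credit_type == "mortgage":
        return 0.15
    if credit_type == "revolving":
        return 0.04
    if credit_type == "other_retail":
        w = (1 - exp(-35 * pd)) / (1 - exp(-35))
        return 0.03 * w + 0.16 * (1 - w)
    if credit_type == "corporate":
        w = (1 - exp(-50 * pd)) / (1 - exp(-50))
        return 0.12 * w + 0.24 * (1 - w)
    raise ValueError(f"unknown credit type: {credit_type}")

def capital_requirement(pd, lgd, credit_type, maturity=2.5):
    """Capital requirement K as a fraction of EAD at the 99.9% confidence level."""
    r = correlation(pd, credit_type)
    k = lgd * norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r)) - pd * lgd
    if credit_type == "corporate":
        b = (0.11852 - 0.05478 * log(pd)) ** 2        # maturity adjustment
        k *= (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    return k

pd_, lgd, ead = 0.02, 0.45, 1_000_000                  # hypothetical inputs
k = capital_requirement(pd_, lgd, "mortgage")
print(f"K = {k:.4%}, EC = {k * ead:,.0f}, RWA = {12.5 * k * ead:,.0f}")
```

Changing the credit_type argument switches among the four categories listed above.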
3. MARKET RISK

Market Risk, as a Pillar I risk, has requirements similar to those for economic regulatory
capital. The following methodologies are supported in the CMOL software:
 Uses historical data on asset holdings and foreign and domestic currency amounts
to generate gross Value at Risk (VaR).
 Returns internal simulated VaR with various holding days and VaR percentiles.
 Returns Central Bank requirements for VaR computations based on asset holdings and
positions.
 Creates VaR charts and reports over time.
The particularities of market risk arguably make it the easiest Pillar I risk to model and calculate,
and the one that has seen the most tool development so far. This is explained by the fact that the
main input for market risk measurement and modeling is market prices of assets or, more
practically, their volatilities. Therefore, there is great public availability of data, as opposed to the
other Pillar I risks, which do not have daily prices publicly available. As an example, there is no
public pricing of a particular group of retail loans issued by a private bank. Yet, modeling tools
for both market and credit risk are based on the same approach: utilizing past stylized data to
project future behavior under certain assumptions and within a confidence interval. Logically
then, market risk has a great bundle of information available and the potential to better test and
calibrate models. As presented, market risk models take on a Value at Risk (VaR) approach.

3.1 CMOL Market Risk Module


Figure 4 illustrates the PEAT utility’s ALM-CMOL module for Market Risk where Market Data
is entered. Users start by entering the global settings, such as the number of investment assets
and currency assets in the bank's portfolio that require further analysis; the total number of rows of
historical data that will be used for the analysis; and the VaR percentiles to run (e.g., 99.00% and
95.00%). In addition, the volatility method of choice (industry standard volatility or Risk Metrics
volatility) and the date type (mm/dd/yyyy or dd/mm/yyyy) are entered. The amount
invested (balance) in each asset and currency is entered, and the historical data can be typed in,
copied and pasted from another data source, or uploaded to the data grid. The settings and the
historical data can then be saved for future retrieval and further analysis in subsequent
subtabs.


FIGURE 4 Market Risk Data.


Figure 5 illustrates the computed results for the Market VaR. Based on the data entered in the interface
shown in Figure 4, the results are computed and presented in two separate grids: the VaR results and
the asset positions and details. The computations can be rerun by clicking Update, and the results
can be exported to an Excel report template if required. The results computed in the first grid are
based on user input market data. For instance, the VaR calculations are simply the Asset Position ×
Daily Volatility × Inverse Standard Normal Distribution of VaR Percentile × Square Root of the Horizon in Days.
In other words, we have:
$$VaR_{X\%} = \text{Asset Position} \times \sigma_{daily} \times \Phi^{-1}(X\%) \times \sqrt{\text{Horizon in Days}}$$

Therefore, the Gross VaR is simply the summation of all VaR values for all assets and foreign
exchange–denominated assets. In comparison, the Internal Historical Simulation VaR uses the
same calculation based on the historically simulated time-series of asset values. The historically
simulated time-series of asset values is obtained by the Asset's Investment × Asset Price(t–1) × Period-
Specific Relative Returns – Asset's Current Position. The Asset's Current Position is simply the Investment
× Asset Price(t). From this simulated time series of asset flows, the (1 – X%) percentile asset value
is the VaR X%. Typically, X% is 99.00% or 95.00% and can be changed as required by the user
based on the regional- or country-specific regulatory agency’s statutes. This can be stated as:
$$VaR_{X\%}^{Internal} = \text{Percentile}_{\,(1 - X\%)}\left(\text{Historically Simulated Asset Values}\right)$$

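A minimal sketch of both calculations is shown below (illustrative only, not CMOL code); the price history and units held are hypothetical, and the historical simulation is read here as applying each period-specific relative return to the current position, which is one common interpretation of the description above.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical price history and holdings for a single asset (illustrative values).
prices = np.array([100.0, 101.2, 100.8, 102.5, 101.9, 103.1, 102.4, 104.0, 103.2, 104.8])
units_held = 10_000
var_pct, horizon_days = 0.99, 10

log_returns = np.diff(np.log(prices))
daily_vol = log_returns.std(ddof=1)            # daily volatility of log relative returns
position = units_held * prices[-1]             # asset's current position

# Gross (parametric) VaR: Position x daily volatility x inverse-normal(VaR%) x sqrt(horizon).
gross_var = position * daily_vol * norm.ppf(var_pct) * np.sqrt(horizon_days)

# Internal historical simulation VaR: apply each period-specific relative return
# to the current position and take the (1 - VaR%) percentile of the simulated changes.
relative_returns = prices[1:] / prices[:-1]
simulated_changes = position * relative_returns - position
hist_var = -np.quantile(simulated_changes, 1 - var_pct)

print(f"Gross VaR ({horizon_days}-day, {var_pct:.0%}): {gross_var:,.0f}")
print(f"Historical simulation VaR (1-day, {var_pct:.0%}): {hist_var:,.0f}")
```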

Procedures: Market Risk
 Proceed to the Market Risk | Market Data tab (Figure 4). Select or enter the following information:
o Number of Investment Assets to analyze. Each asset is set up as a column in
the data grid.
o Number of Currency Assets including domestic and foreign currencies.
o Number of Rows of historical data you have to analyze.
o Up to two relevant VaR Percentages: Typically these are 95.00%, 99.00%,
99.50%, or 99.90%.
o Volatility Methodology, where the Standard Volatility is the default and most
typically used approach, computed using the annualized standard deviation of
the logarithms of relative returns of the assets over time, versus the Risk
Metrics EWMA, or exponentially weighted moving average, approach (a sketch
comparing the two approaches follows this procedure list).
o Date Type can be USA style (mm/dd/yyyy) or Europe and Latin America style
(dd/mm/yyyy).
 Enter the Investment Amounts and Asset Names (name is optional but investment
amount is required) in the first two rows of the data grid headings (Figure 4). As an
illustration, if Investment Amounts are left empty or set as 0 (Figure 4’s Assets 5-9
investment header), then the results will return as null (Figure 5’s rows for Assets 5-9).
 Copy and paste your historical data into the data grid. If there are missing data points,
leave them as empty cells. Alternatively, you can enter 0 (zero) for missing values, but
it is simpler to leave them empty.
o TIP: Use CTRL+C in Excel or data files to copy the data, and selecting the
first cell in the data grid where appropriate, use CTRL+V to paste the data.
o TIP: The data grid will reject all alphanumeric entries (e.g., ABC123) and will
only accept zero, empty cells, or positive numerical values.
 Provide the dataset a Name and Save it. As usual, you can save multiple datasets and
edit, delete, rename, duplicate (Save As), and rearrange these saved datasets and settings
(using the up and down arrows). All saved datasets will be saved in a single *.rovcml file,
which can also be encrypted for added security.
 Proceed to the Market Risk | Value at Risk tab (Figure 5) to see your results. The
standard 1-Day, 5-Day, and 10-Day Holding Period Gross Value at Risk and Internal
Historical Simulation VaR results, based on the VaR percentages you entered previously,
are computed. These VaR values are separated by debt/bonds and currencies, as
required by Basel regulations. The individual assets’ daily volatility, weights, and VaR for
the different holding periods and user-specified VaR percentages are computed and
shown.
 Proceed to the Market Risk | Central Bank VaR tab (Figure 6). Enter the required VaR
Percentile, Horizon Days for the VaR, Number of Assets, and paste in the required
historical data in the data grid. Only white cells are user inputs in the data grid. Greyed
cells are computed results. Do not forget to also enter the Asset Type/Name and the
Central Bank’s Regulatory Volatility for that specific asset class on the first two rows of
the data grid’s headers. The Value at Risk results are automatically computed.


 Proceed to the Market Risk | Results Visual tab (Figure 7). Here you can select from
droplists the previously computed results to chart.
 TIP: As a reminder, after completing and saving the file, you can optionally generate a
report of the calculations. The report can be generated in the Menu icon by selecting
Create Report. Excel will open and the inputs, settings, and computed results will be
pasted into Excel. While the report is being generated, remember to let the software
work; in other words, hands off the keyboard and mouse for a few seconds. Note that
the report may be empty if there are no computations for a specific tab.
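As referenced in the Volatility Methodology step above, the sketch below contrasts the standard annualized volatility of log relative returns with a RiskMetrics-style EWMA estimate; the price series, the 252 trading days, and the conventional decay factor of 0.94 are assumptions made for illustration.

```python
import numpy as np

# Hypothetical daily price history for one asset.
prices = np.array([100.0, 101.2, 100.8, 102.5, 101.9, 103.1, 102.4, 104.0])
log_returns = np.diff(np.log(prices))
trading_days = 252                                   # assumed trading days per year

# Standard volatility: annualized standard deviation of the log relative returns.
standard_vol_annual = log_returns.std(ddof=1) * np.sqrt(trading_days)

# RiskMetrics-style EWMA volatility with the conventional decay factor of 0.94.
lam = 0.94
ewma_variance = log_returns[0] ** 2
for r in log_returns[1:]:
    ewma_variance = lam * ewma_variance + (1 - lam) * r ** 2
ewma_vol_daily = ewma_variance ** 0.5

print(f"Standard annualized volatility: {standard_vol_annual:.4%}")
print(f"EWMA daily volatility:          {ewma_vol_daily:.4%}")
```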

FIGURE 5 Market Value at Risk.

3.2 Central Bank Market Risk


Many countries issue regulations for market risk measurement and capital allocation, whereby
some standardized models are suggested or even imposed, in line with the Basel Standards. We
analyze such an example in Figure 6, where the regulatory model can be obtained by utilizing the
parameters given by the regulator (i.e., volatilities and holding periods for given common assets).
The structure of the tool allows for the comparison of regulatory, internal, and stressed scenarios,
giving the analysts a large array of results to better interpret risk measurement, capital allocation,
and future projections. The VaR computations are based on the same approach as previously
described, and the inputs, settings, and results can be saved for future retrieval. Figure 7 provides
a results visual of the VaR computations based on gross value and internal historical simulation
methods.


FIGURE 6 Market Central Bank VaR.

FIGURE 7 VaR Results Visual.

4. LIQUIDITY RISK (ALM)

Liquidity Risk is another key element in Basel II/III requirements. The following methodologies are supported in the CMOL software:
 Uses interest-sensitive Asset and Liability historical data in computing Asset Liability Management (ALM) modeling.
 Computes Liquidity Gap, Economic Value of Equity (EVE), and Net Income Margin
(NIM) based on interest rate risks and liquidity risks.
 Stress Testing and Scenario Analysis are applied.
As with any other Basel-defined risk, KRIs are constructed based on the inputs and results, and
can be duly monitored and reported, in line with the IMMM process. Liquidity and interest rate
risk are usually managed together in a function called Asset and Liability Management (ALM).
These two risks are closely intertwined, since liquidity risk monitors the availability of liquid funds
to confront disbursement requirements (usually in three time horizons: immediate and intraday,
short-term structure, and long-term structure), while interest rate risk measures the impact of the
difference in maturities, or duration, for assets and liabilities.

4.1 ALM: Net Interest Margin and Economic Value of Equity


Figure 8 illustrates the PEAT utility’s ALM-CMOL module for Asset Liability Management—Interest
Rate Risk’s Input Assumptions and general Settings tab. This segment represents the analysis of ALM
computations, the practice of managing risks that arise due to mismatches between the maturities of
assets and liabilities. The ALM process is a mix of risk management and strategic planning for a bank
or financial institution. It is about offering solutions that mitigate or hedge the risks arising from the
interaction of assets and liabilities, and about managing assets to meet complex liabilities in a way
that helps increase profitability. The current tab starts by obtaining, as
general inputs, the bank’s regulatory capital obtained earlier from the credit risk models. In addition,
the number of trading days in the calendar year of the analysis (e.g., typically between 250 and 255
days), the local currency’s name (e.g., U.S. Dollar or Argentinian Peso), the current period when the
analysis is performed and results reported to the regulatory agencies (e.g., January 2015), the number
of VaR percentiles to run (e.g., 99.00%), number of scenarios to run and their respective basis point
sensitivities (e.g., 100, 200, and 300 basis points, where every 100 basis points represent 1%), and
number of foreign currencies in the bank’s investment portfolio. Figure 9 further illustrates the PEAT
utility's ALM-CMOL module for ALM, specifically the Interest Rate Sensitive Assets and Liabilities
data, where historical data on interest-rate sensitive assets and liabilities, as well as foreign currency–
denominated assets and liabilities, are entered, copied and pasted, or uploaded from a database. Historical
Interest Rate data is uploaded next (Figure 10), where the rows of periodic historical interest rates of local
and foreign currencies can be entered, copied and pasted, or uploaded from a database.


FIGURE 8 Asset Liability Management—Global Settings.

FIGURE 9 Asset Liability Management—Interest Rate Risk (Asset and Liability Data).


FIGURE 10 Asset Liability Management—Historical Interest Rates.

The most straightforward way to present ALM structures for liquidity and interest-rate risk
management is through the utilization of Gap charts. A Gap chart is simply the listing of all assets
and liabilities as affected by interest rate movements or liquidity movements, respectively, ordered
on time-defined buckets (i.e., days, weeks, months, or years). Typically, for interest rate risk there
are two main management approaches: a shorter-term structure analysis based on a more
accounting-side perspective, usually referred to as the NIM (Net Interest Margin) approach, and
a longer-term structure analysis based on a more economic-side perspective, usually referred to
as the EVE (Economic Value of Equity) approach. The NIM approach rests on the logic that
the natural mismatch between assets and liabilities has an impact on earnings, through the net
interest margin, and such impact can be measured through given deltas (variations) in the
referential market interest rate. In this case, the impact is measured through the GAP chart, as
applied to balance sheet items of the asset and liability sides, respectively. So, on the one hand, a
natural NIM approach would deliver a balance sheet impact on earnings, based on the structure
and maturity of assets and liabilities, when subjected to a 100 basis point increase in the referential
market interest rate risk. Because the Gap analysis defines which side of the balance sheet (assets
or liabilities) the cash flow is on, as well as accounting for each time bucket, analysts can define
which sign would apply to earnings should interest rates go up or down. Therefore, the
combination of these two tools allows for the establishment of different business and stress
scenarios and, hence, the determination of targets and limits on the structure and duration of
assets and liabilities. The EVE approach, on the other hand, is a long-term evaluation tool, by
which analysts can determine the impact on capital (or equity, defined as assets minus liabilities)
of referential market interest rate variations, as they affect the net present value and duration of the
described balance sheet items. By this approach, the system can calculate the deltas in durations
and in net present value of assets, liabilities, and equity, as measured in the Gap charts. Therefore,
such variations allow for the construction of scenarios for the different impacts on equity value
and duration of changes in the referential market interest rate. These results are then fed into
different KRIs for monitoring, defining, and calibrating targets and limits, in line with the IMMM
risk management structure.
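A simplified sketch of the EVE logic is shown below (not CMOL's internal model); the balance-sheet values and durations are hypothetical, and the first-order duration-gap approximation used here ignores convexity and discounting.

```python
# Hypothetical balance-sheet aggregates for a duration-gap style EVE sketch.
assets, asset_duration = 1_000_000_000, 3.5        # market value and duration in years
liabilities, liability_duration = 900_000_000, 1.2
equity = assets - liabilities

duration_gap = asset_duration - liability_duration * liabilities / assets

for bps in (100, 200, 300):                        # basis-point scenarios
    dy = bps / 10_000
    # First-order approximation of the change in the economic value of equity
    # for a parallel shift of dy in the referential market interest rate.
    delta_eve = -(asset_duration * assets - liability_duration * liabilities) * dy
    print(f"{bps} bp shift: duration gap {duration_gap:.2f} years, "
          f"Delta EVE {delta_eve:,.0f} ({delta_eve / equity:.2%} of equity)")
```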
Figure 11 illustrates the Gap Analysis results of Interest Rate Risk. The results are shown in
different grids for each local currency and foreign currency. Gap Analysis is, of course, one of
the most common ways of measuring liquidity position and represents the foundation for
scenario analysis and stress testing, which will be executed in subsequent tabs. The Gap Analysis
results are from user inputs in the input assumptions tab. The results are presented for the user
again for validation and in a more user-friendly tabular format. The Economic Value of Equity
results (Figure 12) are based on interest-rate risk computations in previous tabs. The impact on
regulatory capital as denoted by VaR levels on local and foreign currencies is computed, as are
the duration gaps and basis point scenarios affecting the cash flows of local and foreign
currencies.

FIGURE 11 Asset Liability Management—Interest Rate Risk: Gap Analysis.


FIGURE 12 Asset Liability Management—Economic Value of Equity.

Figure 13 illustrates the Net Income Margin (NIM) Input Assumptions requirements based on
interest-rate risk analysis. The highlighted cells in the data grid represent user input requirements
for computing the NIM model. The Economic Value of Equity and Gap Analysis calculations
described above are for longer-term interest-rate risk analysis, whereas the NIM approach is for
shorter-term (typically 12 months) analysis of liquidity and interest-rate risk effects on assets and
liabilities.
In NIM calculations, the basis-point scenario is first converted into a rate change,

$$\Delta i\% = \frac{\text{Basis Points}}{10{,}000}$$

which is applied to the interest-rate gap of each monthly time band to obtain the change in net interest income,

$$\Delta NII = \sum_{t} \Delta NII_t$$

Figure 14 shows the results of the gap analysis, where the impact of each monthly band is prorated over the months remaining in the 12-month horizon:

$$\Delta NII_t = GAP_t \times \Delta i\% \times \frac{12 - t}{12}$$

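The sketch below applies a common repricing-gap formulation consistent with the equations above (an illustrative assumption, not necessarily CMOL's exact implementation); the monthly gap figures and the 100-basis-point shift are hypothetical.

```python
# Hypothetical 12-month repricing gap profile (assets minus liabilities per band).
monthly_gap = [50_000_000, 30_000_000, -20_000_000, 10_000_000, 5_000_000, -15_000_000,
               25_000_000, 0, -10_000_000, 20_000_000, 15_000_000, -5_000_000]
basis_points = 100
delta_rate = basis_points / 10_000                  # 100 basis points = 1%

# Each band's gap is assumed to reprice for the remainder of the 12-month horizon.
delta_nii = sum(gap * delta_rate * (12 - month) / 12
                for month, gap in enumerate(monthly_gap, start=1))
print(f"Change in net interest income for a {basis_points} bp shift: {delta_nii:,.0f}")
```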
FIGURE 13 Net Income Margin (NIM): Input Assumptions and Model.


FIGURE 14 Net Income Margin (NIM): Results.

Procedures: ALM Interest Rate Liquidity Risk
 Proceed to the Asset Liability Management | Interest Rate Risk | Input Assumptions |
Settings tab (Figure 8). Here, enter the required information including the Bank's
Regulatory Capital (either as an assumption or based on the calculations in the previous
Credit Risk module), Number of Trading Days/Year (e.g., typically between 250 and
255), Local Currency Name (e.g., Peso or USD), Analysis Period (e.g., Jan 2014),
Number of VaRs to run and their respective VaR % values (e.g., 90.0%, 95.0%, 99.0%),
Number of Scenarios to run and their respective Basis Point Sensitivity values (e.g., 100
and 200 basis points), and, finally, the Number of Foreign Currencies to analyze and
their Foreign Currency Names. At this point, it is best to enter a Name for the settings
and inputs, and Save such that additional data that will be entered in the next few tabs
can be saved to this same dataset.
 Proceed to the Asset Liability Management | Interest Rate Risk | Input Assumptions |
Rate Sensitive Assets and Liabilities tab (Figure 9). Here, paste your historical data for
local and foreign currencies’ Asset and Liabilities. These variables are clearly labeled in
the header for your convenience. Note that the rows represent Time Bands and proceed,
per Basel regulations, from 0, 1, 2, 3, …, 24 months, and then in steps of 12 months
from 24, 36, 48, …, 360 to represent a 30-year time span, with the first 2 years
represented monthly and all subsequent years represented annually (a sketch of this
time-band structure follows this procedure list). As usual, leave missing data as empty
cells or enter 0 in their stead. Do not forget to click the Save button to save the
entered data as part of the current model.
 Proceed to the Asset Liability Management | Interest Rate Risk | Input Assumptions |
Historical Interest Rates tab (Figure 10). Here, you can paste in historical interest rates
for the single Local Currency and each of the multiple Foreign Currencies. Remember
to click Save when done, to update and save the current model. The convention is to
paste oldest data first (i.e., following a time-series chronology).
o TIP: Remember that the Save buttons refer to updating the current saved
model that is being edited (Figure 8). You can save multiple models in this list,
of course, and all saved models, settings, data, inputs, assumptions, are saved
inside a single *.rovcml file.
o TIP: Remember, when everything is said and done, or while performing
critical tasks, to click on the Menu icon and click Save to save the global
*.rovcml file. Remember that this menu item’s save function saves the *.rovcml
file that contains multiple saved models and datasets in various tabs. Therefore,
clicking on the Save or Save As buttons in these various tabs only performs
local saves pertinent to the dataset or settings you are creating, and not the
entire file.
 Proceed to the Asset Liability Management | Interest Rate Risk | Gap Analysis tab
(Figure 11). Here, the currently edited saved dataset model will be presented, following
your input assumptions and required time bands, for the local currency as well as all
foreign currencies.
 Proceed to the Asset Liability Management | Interest Rate Risk | Economic Value of
Equity tab (Figure 12). Here, the EVE results are presented, showing the longer-term
analysis of VaR at various percentiles for each of the Local and Foreign Currencies. The
duration gap is also shown, together with the various Scenario analysis results based on
the Basis Points previously entered.
 Proceed to the Asset Liability Management | Interest Rate Risk | Net Income Margin
| Input Assumptions tab (Figure 13). Here, shorter-term analysis is performed. Enter
the number of rows required for Assets and Liabilities’ subcomponents for the local
currency and foreign currency. Then proceed to complete the monthly cash flows in the
data grid as well as entering the labels/names for these cash flow line items. A sample
set of inputs is shown in Figure 13. All white cells are user input cells, including the
Starting Balances and Monthly Cash Flows for the relevant 12 months of historical data
for the specific year to analyze.
 Proceed to the Asset Liability Management | Interest Rate Risk | Net Income Margin
| NIM Results tab (Figure 14). Here, the short-term NIM and Net Interest Income
impacts based on various Basis Point Changes and time periods are analyzed and
presented.


4.2 ALM: Liquidity Risk with Scenario Analysis and Stress Testing
Figure 15 illustrates the PEAT utility’s ALM-CMOL module for Asset Liability Management—
Liquidity Risk Input Assumptions tab on the historical monthly balances of interest-rate sensitive
assets and liabilities. The typical time horizon is monthly for one year (12 months) where the
various assets such as liquid assets (e.g., cash), bonds, and loans are listed, as well as other asset
receivables. On the liabilities side, regular short-term deposits and timed deposits are listed,
separated by private versus public sectors, as well as other payable liabilities (e.g., interest
payments and operations). Adjustments can also be made to account for rounding issues and
accounting issues that may affect the asset and liability levels (e.g., contingency cash levels,
overnight deposits, etc.). The Liquidity Risk’s Scenario Analysis and Stress Testing settings can
be set up to test interest-rate sensitive assets and liabilities. Multiple scenarios can be saved for
future retrieval and analysis in subsequent tabs as each saved model constitutes a stand-alone
scenario to test. Scenario analysis typically tests both fluctuations in assets and liabilities and their
impacts on the portfolio’s ALM balance, whereas stress testing typically tests the fluctuations on
liabilities (e.g., runs on banks, economic downturns where deposits are stressed to the lower limit)
where the stressed limits can be entered as values or percentage change from the base case.
Multiple stress tests can be saved for future retrieval and analysis in subsequent tabs as each saved
model constitutes a stand-alone stress test. Figure 16 illustrates the scenario testing inputs and
Figure 17 illustrates the stress-testing input assumptions. Figure 18 illustrates the Liquidity Risk’s
Gap Analysis results. The data grid shows the results based on all the previously saved scenarios
and stress-test conditions. Gap is, of course, calculated as the difference between Monthly Assets
and Liabilities, accounting for any Contingency Credit Lines. Figure 19 shows the charts of the
various computed results.
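As context for the Gap Analysis arithmetic described above, the following minimal Python sketch illustrates the monthly calculations conceptually. The figures, the variable names, and the exact treatment of contingency credit lines are illustrative assumptions only; they are not CMOL output or the software's internal logic.

```python
# Minimal sketch of monthly liquidity Gap Analysis arithmetic (hypothetical numbers).
monthly_assets      = [1_250, 1_300, 1_280]   # e.g., liquid assets + bonds + loans, per month
monthly_liabilities = [1_190, 1_310, 1_240]   # e.g., deposits and other payables, per month
monthly_deposits    = [  980, 1_050, 1_010]   # used for the Gap-to-Deposit liquidity indicator
contingency_lines   = [   50,    50,    50]   # available contingency credit lines (assumed additive)

cumulative = 0.0
for m, (a, l, d, c) in enumerate(zip(monthly_assets, monthly_liabilities,
                                     monthly_deposits, contingency_lines), start=1):
    effective_gap = a - l + c          # assets less liabilities, accounting for contingency lines
    cumulative += effective_gap        # cumulative gap carried forward month to month
    indicator = effective_gap / d      # Gap-to-Deposit ratio (liquidity indicator)
    print(f"Month {m}: gap={effective_gap:+.0f}  cumulative={cumulative:+.0f}  gap/deposits={indicator:+.2%}")
```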

FIGURE 15 Asset Liability Management: Liquidity Risk Model and Assumptions.


FIGURE 16 Asset Liability Management—Scenario Analysis.

FIGURE 17 Asset Liability Management—Stress Testing.


FIGURE 18 Asset Liability Management—Liquidity Risk: Gap Analysis.

FIGURE 19 Asset Liability Management—Charts.


Procedures: ALM Liquidity Risk with Scenario and Sensitivity Analysis
 Proceed to the Asset Liability Management | Liquidity Risk | Input Assumptions tab
(Figure 15). Start by setting up the number of subcategories required for Liquidity,
Bonds, and Loans. These subcategories will show up as additional input rows in the data
grid. Also remember to enter the Management Limit and Contingency Limits required.
Then proceed to enter or paste in the Monthly Assets and Liabilities data, starting with
the Names or labels for each row, Starting Balances, and Monthly Cash Flows. All white
cells are user input cells, and in places where there are no data, simply leave them empty
or enter zeros. As usual, you can enter as many datasets as required, but remember to
Save each of these datasets. The current and active dataset will be the dataset analyzed
in the subsequent tabs.
 Proceed to the Asset Liability Management | Liquidity Risk | Scenario Analysis tab
(Figure 16). Start by selecting one of the Saved Datasets to analyze. You can decide to
enter your scenarios as % Changes or Actual Values to run and test, from the radio-
button selection on the right. Based on the saved dataset selected, the Names of the
Asset and Liability subcomponents will be updated (these are based on the names
previously entered in the input assumptions tab). Then, proceed to enter/paste the
required scenario data. Remember to Save the scenario. Continue by adding additional
scenarios if required. Remember to select the correct dataset each time before entering
the scenario values to test.
o TIP: Scenario Analysis is used to run different scenarios and conditions where
the Private Sector Assets/Liabilities such as Regular Deposits and Time
Deposits change. Gap Analysis is then run based on the original input
assumptions and under each of these scenarios to determine Monthly Gaps.
 Proceed to the Asset Liability Management | Liquidity Risk | Stress Testing tab (Figure
17). Start by selecting one of the Saved Datasets to stress test, decide if you wish to enter
your stress tests as % Changes or Actual Values to run and test, and enter the relevant
inputs. Remember to Save the stress test. Continue by adding additional stress tests if
required. Remember to select the correct dataset each time before entering the values to
stress test.
o TIP: Stress Testing is used to run different scenarios and conditions where the
Liabilities, such as Regular Deposits and Time Deposits, are dramatically changed
or stressed. Gap Analysis is then run based on the original input assumptions
and under each of these stressed situations to determine Monthly Gaps.
 Proceed to the Asset Liability Management | Liquidity Risk | Gap Analysis tab (Figure
18). This tab shows the monthly Effective Gap, Cumulative Gap, and Liquidity
Indicators (Gap to Deposit ratios) for an entire year.
 Proceed to the Asset Liability Management | Liquidity Risk | Charts tab (Figure 19).
The results described above are presented in a time-series chart for users to better
visualize the fluctuations.


5. ANALYTICAL MODELS

Analytical Models modules contain models for estimating and valuing PD, EAD, LGD,
Volatility, Credit Exposures, Options-based Asset Valuation, Debt Valuation, Credit
Conversion Factors (CCF), Loan Equivalence Factors (LEQ), Options Valuation,
Hedging Ratios, and multiple other models. In Basel II/III, the regulations specifically state that
all Over the Counter (OTC) options, options-embedded instruments, and other exotic options
need to also be valued and accounted for. This requirement is why CMOL has devoted an entire
module to modeling and valuing these exotic nonlinear instruments. The module is divided into
four categories depending on their required inputs and structure of the model. In other words,
you might see analytical types like Probability of Default or Volatility traversing multiple tabs or
analytical segments.

5.1 Credit and Market Risk Analytical Models


Figure 20 illustrates the Analytical Models tab with input assumptions and results. This analytical
models segment is divided into Structural, Time-Series, Portfolio, and Analytics models. The
current figure shows the Structural models tab where the computed models pertain to credit risk–
related model analysis categories such as PD, EAD, LGD, and Volatility calculations. Under each
category, specific models can be selected to run. Selected models are briefly described and users
can select the number of model repetitions to run and the decimal precision levels of the results.
The data grid in the Computations tab shows the area in which users would enter the relevant
inputs into the selected model and the results would be computed. As usual, selected models,
inputs, and settings can be saved for future retrieval and analysis.

5.2 Credit Structural Models


Figure 20 illustrates the Structural Analytical Models tab with visual chart results. The results
computed are displayed as various visual charts such as bar charts, control charts, Pareto charts,
and time-series charts. The following analytical types and models are supported in the software:
 Exposure at Default (EAD):
o Credit Risk Plus Average Defaults
o Credit Risk Plus Percentile Defaults
o Retail EAD using Credit Conversion Factor
o Revolving Credit LEQ, CCF, EADF


 Loss Given Default (LGD):


o Publicly Traded Firms
 Probability of Default (PD):
o PD using Market Comparables
o PD using Bond Yields and Spreads
 Volatility
o Implied Volatility Call
o Implied Volatility Put

5.3 Credit Time-Series Models


Figure 21 illustrates the Time-Series Analytical Models tab with input assumptions and results.
The analysis category and model type are first chosen where a short description explains what
the selected model does, and users can then select the number of models to replicate as well as
decimal precision settings. Input data and assumptions are entered in the data grid provided
(additional inputs can also be entered if required), and the results are computed and shown. As
usual, selected models, inputs, and settings can be saved for future retrieval and analysis. The
following analytical types and models are supported in the software:
 Probability of Default (PD)
o PD on Individuals and Retail (Logit)
o Limited Dependent Variables (Probit)
o Limited Dependent Variables (Tobit)
 Volatility
o Historical Volatility
o GARCH Forecast Volatility

5.4 Credit Portfolio Models


Figure 22 illustrates the Portfolio Analytical Models tab with input assumptions and results. The
analysis category and model type are first chosen where a short description explains what the
selected model does, and users can then select the number of models to replicate as well as
decimal precision settings. Input data and assumptions are entered in the data grid provided
(additional inputs such as a correlation matrix can also be entered if required), and the results are
computed and shown. The following analytical types and models are supported in the software:
 Bond-Related Options, Pricing and Yields
o Bond Price (Discrete Discounting)
o Bond Price (Continuous Discounting)
o Bond Convexity YTM (Continuous Discounting)
o Bond Convexity YTM (Discrete Discounting)
o Bond Convexity (Continuous Discounting)


o Bond Convexity (Discrete Discounting)


o Bond Duration (Continuous Discounting)
o Bond Duration (Discrete Discounting)
o Bond Macaulay Duration
o Bond Modified Duration
 Value at Risk (VaR)
o Static Covariance Method
o Value at Risk (Options)
o Portfolio Returns
o Portfolio Risk

5.5 Credit Models


Additional models are available in the Credit Models tab (Figure 23) with input assumptions and
results. The analysis category and model type are first chosen, and input data and assumptions
are entered in the required inputs area (if required, users can Load Example inputs and use these
as a basis for building their models), and the results are computed and shown. Scenario tables
and charts can be created by entering the From, To, and Step Size parameters, where the
computed scenarios will be returned as a data grid and visual chart. As usual, selected models,
inputs, and settings can be saved for future retrieval and analysis. The following analytical types
and models are supported in the software:
 Basic Options Models
o Generalized Black–Scholes Call and Generalized Black–Scholes Put
o Closed-Form American Call and Closed-Form American Put
o Binomial American Call and Binomial American Put
o Binomial European Call and Binomial European Put
o Warrants Diluted Valuation
 Bond-Related Options, Pricing and Yields
o Interest Caplet and Interest Floorlet
o Convertible Bond (American) and Convertible Bond (European)
o Bond YTM using Continuous Discounting and Discrete Discounting
o Bond Options with Call and Put using Hull–White Models
o Bond Price and Bond Yield using Cox–Ingersoll–Ross Model
o Bond Price and Bond Yield using Vasicek Model
 Delta-Gamma Hedging
o Delta Hedges (Options Bought, Options Sold, Money Borrowed, and Shares
Purchased in the Hedge)


o Delta-Gamma Hedges (Options Bought, Options Sold, Money Borrowed, and


Shares Purchased in the Hedge)
 Economic Capital
o Retail: Residential Mortgages
o Retail: Revolving Credit
o Retail: Other Credit
o Wholesale, Corporate, Sovereign, Bank
 Exotic Options and Derivatives
o Exotic Options (Americans, Asians, Bermudans, Europeans)
o Over 150 Exotic Options with the following flavors:
 Barriers
 Binaries
 Cash
 Chooser
 Commodity
 Currency
 Equity
 Fixed Strikes
 Flexible Strikes
 Foreign Exchange
 Futures
 Gaps
 Perpetual
 Stock Index
 Swaps
 Two Asset
 Writer-Extensible
 Forecasting Extrapolation and Interpolation
o Yield Curve (Bliss)
o Yield Curve (Nelson–Siegel)
 Put-Call Parity and Option Sensitivity
o 6 Types of Put-Call Parity Models based on Standard Options, Futures, and
Currency Options
o Put and Call Sensitivities including Delta, Gamma, Rho, Theta, and Vega


FIGURE 20 Structural Credit Risk Analytical Models.

FIGURE 21 Time-Series Credit and Market Analytical Models.


FIGURE 22 Credit Portfolio Analytical Models.

FIGURE 23 Credit Models.


Procedures:  Proceed to the Analytical Models | Credit Structural tab (Figure 20).
Analytical Models
o Follow the steps. For instance, start by selecting an Analysis type (e.g., EAD,
LGD, PD, Volatility), and depending on the type selected, a list of models will
appear (e.g., in the EAD model type, the software supports models such as
Retail EAD using Credit Conversion Factor (CCF), Revolving Credit Loan
Equivalence LEQ, CCF, and EAD Factors or EADF). Select one of the
Models to run.
o Based on the selected model, enter the required input assumptions and click
Compute to obtain the results.
o Enter a Name and Save the model and its input assumptions.
o TIP: Structural Models imply that the input parameters are single values, and
the solution is closed form. For instance, A + B = C is a structural model.
Enter inputs for A and B, and the resulting C will be computed. You may have
multiple combinations of A and B, whereupon multiple combinations of C will
be computed. Figure 20’s example shows the LEQ, CCF, EADF model with
4 different sets of inputs.
 Proceed to the Analytical Models | Credit (Time Series) tab (Figure 21). Follow the same
steps as described above, first selecting the Analysis type followed by the Model, then
enter the required inputs. Save the model and its assumptions if required.
o TIP: Time Series Models imply that at least one of the required inputs is a set
of time-series values or multiple inputs arranged in time, while other inputs
may be single-point estimates or also time-series, and the solution is closed
form. Figure 21’s example shows the GARCH volatility model, where the asset
price or stock price is a time series of historical data, arranged chronologically.
 Proceed to the Analytical Models | Credit (Portfolio) tab (Figure 22). Follow the same
steps as described above, first selecting the Analysis type followed by the Model, then
enter the required inputs. Save the model and its assumptions if required.
o TIP: Portfolio Models imply that one or more of the required inputs is a set of
time-series values or cross-sectional values with multiple inputs, and these
variables may or may not be correlated, and the solution is closed form. Figure
22’s example shows the bond duration model with multiple time-series inputs.
You can try selecting the Value at Risk (VaR) category and see that the time-
series inputs also require a correlation matrix (bottom right of the screen).
 Proceed to the Analytical Models | Credit (Models) tab (Figure 23). Follow the same
steps as described above, first selecting the Analysis type followed by the Model, then
enter the required inputs. Save the model and its assumptions if required.
o TIP: Credit Models imply that all inputs are single-point estimates and the
solution is closed form. In addition, users can add From and To values to
generate a two-dimensional scenario table where one or two of the inputs can
be perturbed and changed within some prespecified range and the results are
shown as a table and chart.


6. OPERATIONAL RISK

The Operational Risk module is discussed in this chapter. The following methodologies are
supported in the CMOL software:

 Basic Indicator Approach (BIA) using gross income and an alpha multiplier to estimate
required Capital Charge.
 The Standardized Approach (TSA) using gross income on 8 separate business lines with
their respective beta coefficients to compute the weighted average Capital Charge.
 Alternate Standardized Approach (ASA) using a mixture of gross income as well as
revenues from commercial and retail advances and loans to determine the required
Capital Charge.
 Revised Standardized Approach (RSA) using income-based Business Indicators (BI)
from three subcomponents of the bank’s overall businesses to obtain the Capital
Charge, including income and expenses based on Interest, Financial, and Services
components.
The Advanced Measurement Approach (AMA) is also supported in the software. Monte Carlo
Risk Simulation methods are used in concert with convolution of probability distributions of
operational risk Severity and Frequency to determine Expected Losses (EL), Unexpected Losses
(UL), and estimation of Basel’s OPCAR or Operational Capital at Risk values for the AMA
approach.
The case of operational risk is undoubtedly the most difficult to measure and model. Unlike
market risk, operational risk data is, by its very definition, not only scarce but also biased,
unstable, and unchecked, in the sense that the most relevant operational risk events are not
identified in the balance sheet of any financial institution. Since the modeling approach is still
based on VaR logic, whereby the model utilizes past empirical data to project expected results,
modeling operational risk is a very challenging task. As stated, market risk offers daily, publicly
audited information to be used and modeled. Conversely, operational risk events are, in most
cases, not public, not identified in the general ledger, and, in many instances, not identified at all.
But the utmost difficulty comes from the proper definition of operational risk. Even if we
managed to go about the impossible task of identifying each and every operational risk event of
the past five years, we would still have very incomplete information. The definition of operational
risk entails events generated by failure in people, processes, systems, and external events. With
market risk, asset prices can either go up or down, or stay unchanged. With operational risk, an
unknown event that has never occurred before can take place in the analysis period and materially
affect operations even without it being an extreme tail event. So the logic of utilizing similar
approaches for such different information availability and behavior requires very careful


definitions and assumptions. With this logic in mind, the Basel Committee has defined that in
order to model operational risk properly, banks need to have four sources of operational risk
data: internal losses, external losses, business environment and internal control factors, and
stressed scenarios. These are known as the four elements of operational risk, and the Basel
Committee recommends that they be taken into account when modeling. For smaller banks and
smaller countries, this recommendation poses a definite challenge, because many times these
elements are not developed enough, or not present at all. In this light, most banks have resorted
to just using internal data to model operational risk. This approach comes with some
shortcomings and more assumptions, and should be taken as an initial step that considers the
later development of the other elements as they become available. The example shown in Figure
24 looks at the modeling of internal losses as a simplified approach usually undertaken by smaller
institutions. Since operational risk information is scarce and biased, it is necessary to “complete”
the loss distributions with randomly generated data. The most common approach for the task is
the use of Monte Carlo risk simulations (Figures 24–31) that allow for the inclusion of more
stable data and for the fitting of the distributions into predefined density functions.

6.1 BIA, TSA, ASA, RSA


Basel II and Basel III regulations allow for the use of multiple approaches when it comes to
computing capital charge on operational risk, defined by the Basel Committee as losses resulting
from inadequate or failed internal processes, people, and systems or from external events, which
includes legal risk, but excludes any strategic and reputational risks.

 Basic Indicator Approach (BIA) uses positive Gross Income of the last 3 years applied
to an Alpha multiplier.
 The Standardized Approach (TSA) uses positive Gross Income of 8 distinct business
lines, each with its own Beta risk-weighted coefficient.
 Alternate Standardized Approach (ASA) is based on the TSA method and uses Gross
Income but applies Total Loans and Advances for the Retail and Commercial business
lines, adjusted by a multiplier, prior to using the same TSA beta risk-weighted
coefficients.
 Revised Standardized Approach (RSA) uses Income and Expenses as proxy variables
to obtain the Business Indicator required in computing the risk capital charge.
 Advanced Measurement Approach (AMA) is open-ended in that individual banks can
utilize their own approaches subject to regulatory approval. The typical approach, and
the same method used in the ALM-CMOL software application, is to use historical loss
data, perform probability distribution fitting on the frequency and severity of losses,
which is then convoluted through Monte Carlo Risk Simulation to obtain probability
distributions of future expected losses. The tail-event VaR results can be obtained
directly from the simulated distributions.

Figure 24 illustrates the BIA, TSA, ASA, and RSA methods as prescribed in Basel II/III. The
BIA uses total annual gross income for the last 3 years of the bank and multiplies it with an alpha
coefficient (15%) to obtain the capital charge. Only positive gross income amounts are used. This
is the simplest method and does not require prior regulatory approval. In the TSA method, the
bank is divided into 8 business lines (corporate finance, trading and sales, retail banking,


commercial banking, payment and settlement, agency services, asset management, and retail
brokerage) and each business line’s positive total annual gross income values for the last 3 years
are used, and each business line has its own beta coefficient multiplier. These beta values are
proxies based on industry-wide relationships between operational risk loss experience for each
business line and aggregate gross income levels. The total capital charge based on the TSA is
simply the sum of the weighted average of these business lines for the last 3 years. The ASA is
similar to the TSA except that the retail banking and commercial banking business lines use total
loans and advances instead of using annual total gross income. These total loans and advances
are first multiplied by a 3.50% factor prior to being beta-weighted, averaged, and summed. The
ASA is also useful in situations where the bank has extremely high or low net interest margins
(NIM), whereby the gross income for the retail and commercial business lines are replaced with
an asset-based proxy (total loans and advances multiplied by the 3.50% factor). In addition, within
the ASA approach, the 6 business lines can be aggregated into a single business line as long as it
is multiplied by the highest beta coefficient (18%), and the 2 remaining loans and advances (retail
and commercial business lines) can be aggregated and multiplied by the 15% beta coefficient. In
other words, when using the ALM-CMOL software, you can aggregate the 6 business lines and
enter it as a single-row entry in Corporate Finance, which has an 18% multiplier, and the 2 loans
and advances business lines can be aggregated as the Commercial business line, which has a 15%
multiplier.
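The BIA and TSA calculations described above reduce to simple weighted averages of gross income. The sketch below is a minimal Python illustration of those two formulas, using the standard Basel II alpha of 15% and the commonly published per-business-line beta coefficients; the function names, example figures, and the treatment of negative gross income (floored at zero per year, per the usual Basel convention) are illustrative assumptions and not the software's internal algorithm.

```python
# Illustrative sketch of the BIA and TSA capital-charge formulas (not CMOL's internal code).

ALPHA = 0.15  # BIA alpha multiplier

# Standard Basel II beta coefficients per business line (assumed here for illustration).
BETAS = {
    "corporate_finance": 0.18, "trading_and_sales": 0.18, "retail_banking": 0.12,
    "commercial_banking": 0.15, "payment_and_settlement": 0.18,
    "agency_services": 0.15, "asset_management": 0.12, "retail_brokerage": 0.12,
}

def bia_capital_charge(gross_income_3yrs):
    """BIA: average of alpha x gross income over the years with positive gross income."""
    positive = [gi for gi in gross_income_3yrs if gi > 0]
    return ALPHA * sum(positive) / len(positive) if positive else 0.0

def tsa_capital_charge(gi_by_line_3yrs):
    """TSA: 3-year average of beta-weighted gross income summed across the 8 business lines
    (a negative yearly total is floored at zero)."""
    yearly_totals = []
    for year in gi_by_line_3yrs:  # each year is a dict {business_line: gross_income}
        total = sum(BETAS[line] * gi for line, gi in year.items())
        yearly_totals.append(max(total, 0.0))
    return sum(yearly_totals) / len(yearly_totals)

# Example usage with hypothetical figures (any currency or denomination):
print(bia_capital_charge([120.0, 135.0, 150.0]))   # 0.15 x average gross income = 20.25
year = {"corporate_finance": 20, "trading_and_sales": 15, "retail_banking": 40,
        "commercial_banking": 30, "payment_and_settlement": 5,
        "agency_services": 5, "asset_management": 10, "retail_brokerage": 10}
print(tsa_capital_charge([year, year, year]))
```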
The main issue with BIA, TSA, and ASA methods is that on average, these methods are
undercalibrated, especially for large and complex banks. For instance, these three methods
assume that operational risk exposure increases linearly and proportionally with gross income or
revenue. This assumption is invalid because certain banks may experience a decline in gross
income due to systemic or bank-specific events that may include losses from operational risk
events. In such situations, a falling gross income should be commensurate with a higher
operational capital requirement, not a lower capital charge. Therefore, the Basel Committee has
allowed the inclusion of a revised method, the RSA. Instead of using gross income, the RSA uses
both income and expenditures from multiple sources, as shown in Figure 24. The RSA uses
inputs from an interest component (interest income less interest expense), a services component
(sum of fee income, fee expense, other operating income, and other operating expense), and a
financial component (sum of the absolute value of net profit and losses on the trading book, and
the absolute value of net profit and losses on the banking book). The calculation of capital charge
is based on the calculation of a Business Indicator (BI), where the BI is the sum of the absolute
values of these three components (thereby avoiding any counterintuitive results based on negative
contributions from any component). The purpose of a BI calculation is to promote simplicity
and comparability using a single indicator for operational risk exposure that is sensitive to the
bank’s business size and business volume, rather than static business line coefficients regardless
of the bank’s size and volume. Using the computed BI, the risk capital charge is determined from
5 predefined buckets from Basel II/III, increasing in value from 10% to 30%, depending on the
size of the BI (ranging from €0 to €30 billion). These Basel predefined buckets are denoted in
thousands of Euros, with each bucket having its own weighted beta coefficients. Finally, the risk
capital charge is computed based on a marginal incremental or layered approach (rather than a
full cliff-effect when banks migrate from one bucket to another) using these buckets.
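To make the marginal, layered bucket idea concrete, here is a small Python sketch of how a layered Business Indicator charge can be computed. The bucket boundaries and coefficients below are assumptions for illustration only, chosen to match the range described above (coefficients rising from 10% to 30% over a BI of €0 to €30 billion, expressed in thousands of Euros); the exact bucket definitions should be taken from the regulatory text or from the software itself.

```python
# Illustrative marginal (layered) bucket calculation for the RSA Business Indicator (BI).
# Bucket boundaries/coefficients are assumed for illustration; confirm against the regulation.

BUCKETS = [  # (upper bound of bucket in EUR thousands, coefficient)
    (100_000, 0.10),        # EUR 0 - 100 million
    (1_000_000, 0.13),      # EUR 100 million - 1 billion
    (3_000_000, 0.17),      # EUR 1 - 3 billion
    (30_000_000, 0.22),     # EUR 3 - 30 billion
    (float("inf"), 0.30),   # above EUR 30 billion
]

def rsa_capital_charge(interest, services, financial):
    """BI = sum of absolute component values; the charge is layered so that only the slice
    of BI falling inside each bucket is charged at that bucket's coefficient (no cliff effect)."""
    bi = abs(interest) + abs(services) + abs(financial)  # in EUR thousands
    charge, lower = 0.0, 0.0
    for upper, coeff in BUCKETS:
        slice_amount = max(min(bi, upper) - lower, 0.0)
        charge += coeff * slice_amount
        lower = upper
    return charge

# Example usage with hypothetical component values (EUR thousands):
print(rsa_capital_charge(interest=90_000, services=150_000, financial=60_000))
```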


FIGURE 24 Basel II/III BIA, TSA, ASA, and RSA Methods.

Procedures:  Proceed to the Operational Risk | Basel OPRISK (BIA, TSA, ASA, RSA) tab (Figure
Operational Risk 24). There are 4 sections in this tab. Use the section(s) you require and remember to
(BIA, TSA, ASA, Save the input assumptions. As usual, multiple models can be saved for archiving and
RSA) retrieval later. All white cells are inputs and only positive values are allowed in the first
three sections, whereas zero and negative values are allowed in the fourth section.
o Basic Indicator Approach (BIA). Enter Annual Gross Income for the last three
years to obtain the computed Capital Charge via the BIA model with a static
15% Alpha multiplier. You can enter values in any currency and in any
denomination (units, hundreds, thousands, millions).
o The Standardized Approach (TSA). Enter Annual Gross Income for the last
three years for each of the 8 Business Lines (BL) to obtain the computed
Capital Charge via the TSA model with variable Beta coefficients based on BL.
You can enter values in any currency and in any denomination (units, hundreds,
thousands, millions).
o Alternate Standardized Approach (ASA). Enter Annual Gross Income for the last
three years for each of the 6 Business Lines (BL) and Total Loans and
Advances values for the Retail and Commercial BL to obtain the computed Capital
Charge via the TSA model with variable Beta coefficients based on BL and the
Loans and Advances Multiplier. You can enter values in any currency and in
any denomination (units, hundreds, thousands, millions).


o Revised Standardized Approach (RSA). You must enter values converted into
Thousands of Euros (€’000) in this segment because the Basel II/III
requirements provide risk buckets that are denominated in thousands of
Euros. Instead of using gross income, the RSA uses both income and
expenditures from multiple sources. The RSA uses inputs from the following:
 Interest component. This is interest income less interest expense.
 Services component. This is the sum of fee income, fee expense, other
operating income, and other operating expense.
 Financial component. This is the sum of the absolute value of net
profit and losses on the trading book, and the absolute value of net
profit and losses on the banking book.

6.2 AMA: Advanced Measurement Approach


Figures 24–31 illustrate the Operational Risk Loss Distribution analysis when applying the AMA
method. Users start at the Loss Data tab where historical loss data can be entered or pasted into
the data grid. Variables include losses in the past pertaining to operational risks, segmentation by
divisions and departments, business lines, dates of losses, risk categories, and so on. Users then
activate the controls to select how the loss data variables are to be segmented (e.g., by risk
categories and risk types and business lines), the number of simulation trials to run, and seed
values to apply in the simulation if required, all by selecting the relevant variable columns. The
distributional fitting routines can also be selected as required. Then the analysis can be run and
distributions fitted to the data. As usual, the model settings and data can be saved.
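For readers who want to see the mechanics behind this tab outside the software, the short sketch below illustrates the same idea for one hypothetical segment: estimate a Poisson frequency parameter from monthly event counts and fit a candidate severity distribution to the loss amounts, with a Kolmogorov–Smirnov check. The data, the variable names, and the choice of a lognormal severity are illustrative assumptions, not CMOL's internal routine.

```python
# Minimal sketch of frequency/severity fitting for one risk-segment (illustrative only).
import numpy as np
from scipy import stats

# Hypothetical segment data: loss amount per recorded event and event counts per month.
loss_amounts = np.array([12_500, 8_300, 45_000, 3_200, 19_800,
                         7_400, 61_000, 9_900, 15_200, 5_600])
events_per_month = np.array([2, 0, 1, 3, 1, 0, 2, 1, 1, 0, 4, 1])

lam = events_per_month.mean()  # Poisson frequency (average events per month)

# Fit a lognormal severity distribution (location fixed at zero) and test the fit quality.
sigma, loc, scale = stats.lognorm.fit(loss_amounts, floc=0)
res = stats.kstest(loss_amounts, "lognorm", args=(sigma, loc, scale))

print(f"lambda={lam:.2f}, lognormal sigma={sigma:.3f}, scale={scale:.0f}, KS p-value={res.pvalue:.3f}")
```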

FIGURE 25 Operational Risk data in Advanced Measurement Approach (AMA).


Figure 26 illustrates the Operational Risk—Fitted Loss Distribution subtab. Users start by
selecting the fitting segments for setting the various risk category and business line segments, and,
based on the selected segment, the fitted distributions and their p-values are listed and ranked
according to the highest p-value to the lowest p-value, indicating the best to the worst statistical
fit to the various probability distributions. The empirical data and fitted theoretical distributions
are shown graphically, and the statistical moments are shown for the actual data versus the
theoretically fitted distribution’s moments. After deciding on which distributions to use, users
can then run the simulations.

FIGURE 26 Fitted Distributions on Operational Risk Data.

Figure 27 illustrates the Operational Risk—Risk Simulated Losses subtab using convolution of
frequency and severity of historical losses, where, depending on which risk segment and business
line was selected, the relevant probability distribution results from the Monte Carlo risk
simulations are displayed, including the simulated results on Frequency, Severity, and the
multiplication between frequency and severity, termed Expected Loss Distribution, as well as the
Extreme Value Distribution of Losses (this is where the extreme losses in the dataset are fitted
to the extreme value distributions—see the case study for details on extreme value distributions
and their mathematical models). Each of the distributional charts has its own confidence and
percentile inputs where users can select one-tail (right-tail or left-tail) or two-tail confidence
intervals and enter the percentiles to obtain the confidence values (e.g., user can enter right-tail
99.90% percentile to receive the VaR confidence value of the worst-case losses on the left tail’s
0.10%).


FIGURE 27 Monte Carlo Risk Simulated Operational Losses.

Procedures:  Proceed to the Operational Risk | Loss Data & Fitting (AMA) tab (Figure 25). This tab
Operational Risk allows users to enter historical loss data with associated Risk Types, Business Units,
(Advanced Actual Loss Amounts, and a Date Index, and each of these variables is entered in its
Measurement own data grid column. This allows users to segregate the losses by business units and
Approach) risk types, and perform probability distribution fitting. Using these fitted distributions,
the Frequency of Loss Events is multiplied by the Severity of Losses to obtain the
Expected Loss Distribution.
o Start by entering the data as described above.
o Change the settings and select the relevant variables, i.e., which variable is the
actual loss data, risk category, business unit, and the date index.
o Use at least 10,000 to 100,000 simulation Trials, and if you wish to replicate the
simulation results each time when you rerun the model, then check the Use
Seed Value and enter a numerical integer (e.g., 123).
o Save the settings and data if required.
o Select the distributional fitting routine (use Kolmogorov–Smirnov when in
doubt) and click Run Distribution Fitting.
 Proceed to the Operational Risk | Fitted Loss Distribution (AMA) tab (Figure 26). This
tab shows the fitted distributions for the different Risk Segment and Business Lines.
Select one of the segmentations and users will see a list of the best-fitting distributions
(highest P-Value) to the least-fitting distributions (lowest P-Value), as well as the actual


loss data’s moments compared to the theoretically fitted distributions’ moments. Select
the best-fit distribution when in doubt. Click Run Simulation when done to simulate
and obtain the convoluted simulation results.
 Proceed to the Operational Risk | Simulated Losses (AMA) tab (Figure 27).
o Select the subsegment from the droplist to see the three probability
distributions (Frequency, Severity, and Expected Losses).
o Select the Tail Type (e.g., Left-Tail ≤) and enter a VaR percentile, e.g., 99.9%,
and hit Tab on the keyboard to update the charts.

6.3 AMA: Basel OPCAR Convolution Model


Figure 28 shows the computations of Basel II/III’s OPCAR (Operational Capital at Risk) model
where the probability distribution of risk event Frequency is multiplied by the probability
distribution of Severity of operational losses, the approach where Frequency × Severity is termed
the Single Loss Approximation (SLA) model. The SLA is computed using convolution methods
of combining multiple probability distributions. SLA using convolution methods is complex and
very difficult to compute and the results are only approximations, and valid only at the extreme
tails of the distribution (e.g., 99.9%). However, as can be seen in the Appendix, Monte Carlo Risk
Simulation provides a simpler and more powerful alternative when convoluting and multiplying
two distributions of random variables to obtain the combined distribution. Clearly the challenge
is setting the relevant distributional input parameters. This is where the data-fitting and percentile-
fitting tools come in handy, as will be explained later.
Figure 29 shows the convolution simulation results where the distribution of loss frequency,
severity, and expected losses are shown. The resulting Expected Losses (EL), Unexpected Losses
(UL), and Total Operational Capital at Risk (OPCAR) are also computed and shown. EL is, of
course, the mean value of the simulated results, OPCAR is the tail-end 99.90th percentile, and
UL is the difference between OPCAR and EL.
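Because EL, UL, and OPCAR are simple statistics of the convoluted loss distribution, the calculation can be illustrated in a few lines of simulation code. The sketch below multiplies a simulated Poisson frequency by a simulated Weibull severity and reads off the three values; the distribution choices and parameters are assumptions for illustration, and numpy stands in for the software's simulation engine.

```python
# Illustrative convolution-by-simulation of Frequency x Severity (not CMOL's internal code).
import numpy as np

rng = np.random.default_rng(123)     # seed value for reproducible results
trials = 100_000
lam, alpha, beta = 10, 1.5, 2.5      # assumed Poisson lambda and Weibull shape/scale

frequency = rng.poisson(lam, trials)
severity = beta * rng.weibull(alpha, trials)   # numpy's weibull is unit-scale; multiply by beta
expected_loss_dist = frequency * severity      # convoluted Total Loss distribution

EL = expected_loss_dist.mean()                   # Expected Losses (mean of the simulated results)
OPCAR = np.percentile(expected_loss_dist, 99.9)  # tail-end 99.90th percentile value
UL = OPCAR - EL                                  # Unexpected Losses
print(f"EL={EL:.2f}  UL={UL:.2f}  OPCAR={OPCAR:.2f}")
```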
Figure 30 shows the loss severity data fitting using historical loss data. Users can paste historical
loss data, select the required fitting routines (Kolmogorov–Smirnov, Akaike Criterion, Bayes
Information Criterion, Anderson–Darling, Kuiper’s Statistic, etc.) and run the data fitting
routines. When in doubt, use the Kolmogorov–Smirnov routine. The best-fitting distributions,
p-values, and their parameters will be listed, and the same interpretation applies as previously
explained.
Figure 31 shows the loss severity percentile fitting instead, which is particularly helpful when there
are no historical loss data and only high-level management assumptions of the probabilities
that certain events will occur. In other words, by entering a few percentiles (%) and their
corresponding values, one can obtain the entire distribution’s parameters.
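As a concrete illustration of percentile fitting, the sketch below backs out the two parameters of a lognormal severity distribution from two management-supplied percentiles. The percentile choices and values are hypothetical, and the lognormal is just one of the distributions the software supports; treat this as a conceptual example rather than CMOL's fitting routine.

```python
# Illustrative percentile fitting for a lognormal severity distribution (hypothetical inputs).
# Given P(X <= q1) = p1 and P(X <= q2) = p2, the lognormal satisfies ln(q) = mu + sigma*z(p),
# which is a pair of linear equations in mu and sigma.
import numpy as np
from scipy.stats import norm, lognorm

p1, q1 = 0.50, 20_000     # management estimate: median loss of 20,000
p2, q2 = 0.90, 75_000     # management estimate: 90th percentile loss of 75,000

z1, z2 = norm.ppf(p1), norm.ppf(p2)
sigma = (np.log(q2) - np.log(q1)) / (z2 - z1)
mu = np.log(q1) - sigma * z1

# Sanity check: the fitted distribution reproduces the two input percentiles.
fitted = lognorm(s=sigma, scale=np.exp(mu))
print(sigma, mu, fitted.ppf(p1), fitted.ppf(p2))
```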


FIGURE 28 Basel OPCAR—Frequency and Severity Assumptions.

FIGURE 29 Basel OPCAR—Convoluted Simulation Results.


FIGURE 30 Basel OPCAR—Loss Severity Fitting: Data.

FIGURE 31 Basel OPCAR—Loss Severity Fitting: Percentile.


Procedures:  Proceed to the Operational Risk | Basel OPCAR (AMA) | Frequency and Severity
Operational Risk Assumptions tab (Figure 28). Advanced users can enter simulation input parameters for
(Basel OPCAR the selected distribution(s) and click Convolute and Simulate. Alternatively, skip this tab
Convolution and proceed directly to the Loss Severity Fitting tab to run probability distributional
Model) fitting routines to compute the best-fitting distributions for use in this tab.
 Proceed to the Operational Risk | Basel OPCAR (AMA) | Convoluted Simulation
Results tab (Figure 29). The convolution simulation results of the distribution of loss
frequency, severity, and expected losses are shown. The resulting Expected Losses (EL),
Unexpected Losses (UL), and Total Operational Capital at Risk (OPCAR) are also
computed and shown below the results grid.
o EL is the mean value of the simulated results, OPCAR is the tail-end 99.90th
percentile, and UL is the difference between OPCAR and EL.
o Select the Tail Type (e.g., Left-Tail ≤) and enter a VaR percentile, e.g., 99.9%,
and hit Tab on the keyboard to update the charts.
o Results are shown in the data results grid. The same interpretation of the results
as explained previously still applies.
o Note that if the Lambda input is < 5 or > 100 then the Convolution of
OPCAR result will show the Left-Tail Percentile % based on the simulated
results. If Lambda is between 5 and 100, and the Run Convolution Models
checkbox is selected, then the results will be based on the Convolution
algorithm described in Appendix 2.
 Proceed to the Operational Risk | Basel OPCAR (AMA) | Loss Severity Fitting tab
and select the first radio-button item: “Use historical loss data and distributional fitting”
(Figure 30). The same interpretation of the results as explained previously still applies in
terms of distributional fitting. The difference is that here, the loss data is free flowing
and users can paste loss data without the need for specifying the business lines or risk
types. Remember to save the settings and data. Finally, you can click the Paste Fitted
Parameters button to transfer the fitted parameters to the Frequency and Severity
Assumptions tab.
 Proceed to the Operational Risk | Basel OPCAR (AMA) | Loss Severity Fitting tab
and select the second radio-button item: “Use subject matter estimates and percentile
fitting” (Figure 31). This tab uses the loss severity percentile fitting instead, which is
particularly helpful when there are no historical loss data and where there only exists
high-level management assumptions of the probabilities certain events occur. In other
words, by entering a few percentiles (%) and their corresponding values, one can obtain
the entire distribution’s parameters.


APPENDIX 1: CONVOLUTION &
SIMULATION

Convolution theory is explained here as it applies to probability distributions and
stochastic modeling, both in theory and in practice. This appendix attempts to show that, in
theory, convolution and copulas are elegant and critical in solving basic distributional moments,
but when it comes to practical applications, these theories become impractical and mathematically
intractable, resulting in the need to run empirical Monte Carlo simulations, where the results of
said empirical simulations approach the theoretically predicted results at the limit, giving
practitioners a powerful practical toolkit for modeling. Many probability distributions are both
flexible and interchangeable. For example:
 Arcsine and Parabolic distributions are special cases of the Beta distribution.
 Binomial and Poisson distributions approach the Normal distribution at the limit.
 Binomial distribution is a Bernoulli distribution with multiple trials.
 Chi-Square distribution is the squared sum of multiple Normal distributions.
 Discrete Uniform distributions’ sum (12 or more) approaches the Normal distribution.
 Erlang distribution is a special case of the Gamma distribution.
 Exponential distribution is the inverse of the Poisson distribution on a continuous basis.
 F distribution is the ratio of two Chi-Square distributions.
 Gamma distribution is related to the Lognormal, Exponential, Pascal, Erlang, Poisson,
and Chi-Square distributions.
 Laplace distribution comprises two Exponential distributions in one.
 Lognormal distribution’s logarithmic values approach the Normal distribution.
 Pascal distribution is a shifted Negative Binomial distribution.
 Pearson V distribution is the inverse of the Gamma distribution.
 Pearson VI distribution is the ratio of two Gamma distributions.
 PERT distribution is a modified Beta distribution.
 Rayleigh distribution is a modified Weibull distribution.
 T distribution with high degrees of freedom (> 30) approaches the Normal distribution.


Mathematicians came up with these distributions through the use of convolution. As a quick
introduction, if there are two independent and identically distributed (i.i.d.) random variables, X
and Y, whose known probability density functions (pdf) are $f_X(x)$ and $f_Y(y)$, we can then
generate a new probability distribution by combining X and Y using basic summation,
multiplication, and division. Some examples are listed above, e.g., the F distribution is a division
of two Chi-Square distributions, the Normal distribution is a sum of multiple Uniform
distributions, etc. To illustrate how this works, consider the cumulative distribution function (cdf)
of the sum of the two random variables X and Y with joint density f(x, y):

$$F_{X+Y}(u) = \iint_{x+y \leq u} f(x,y)\,dx\,dy = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{u-x} f(x,y)\,dy\right]dx$$

Differentiating the cdf equation above yields the pdf:

$$f_{X+Y}(u) = \int_{-\infty}^{\infty} f(x,\,u-x)\,dx$$

A1.1 Convolution of Two Uniforms


Example 1: The convolution of the simple sum of two identical and independent uniform
distributions approaches the triangular distribution.
As a simple example, if we take the sum of two i.i.d. uniform distributions with a minimum of 0
and maximum of 1, we have:

$$f_{X+Y}(u) = \int_{-\infty}^{\infty} f(x)\,f(u-x)\,dx$$

where, for a Uniform[0, 1] distribution, $f(x) = 1$ when $0 \leq x \leq 1$, we have:

$$f_{X+Y}(u) = \int_{0}^{1} f(u-x)\,dx = \int_{u-1}^{u} f(t)\,dt =
\begin{cases} u & 0 \leq u \leq 1 \\ 2-u & 1 < u \leq 2 \end{cases}$$

which approaches a simple triangular distribution.

Figure 32 shows an empirical approach where two Uniform [0, 1] distributions are simulated for
20,000 trials and their sums added. The computed empirical sums are then extracted and the raw
data fitted using the Kolmogorov-Smirnov fitting algorithm in Risk Simulator. The triangular
distribution appears as the best-fitting distribution with a 74% goodness of fit. As seen in the
convolution of only two uniform distributions, the result is a simple triangular distribution.
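The same empirical experiment can be reproduced with a few lines of simulation code. The sketch below (using numpy rather than Risk Simulator, with the same 20,000 trials as in the figure) sums two Uniform[0, 1] draws and prints a histogram whose densities rise linearly to a peak near 1 and fall toward 2, the triangular shape derived above.

```python
# Empirical convolution of two Uniform[0, 1] distributions (illustrative sketch).
import numpy as np

rng = np.random.default_rng(42)
trials = 20_000
sums = rng.uniform(0, 1, trials) + rng.uniform(0, 1, trials)

# Histogram of the simulated sums: 20 bins of width 0.1 over [0, 2].
counts, edges = np.histogram(sums, bins=20, range=(0, 2), density=True)
for left, dens in zip(edges[:-1], counts):
    print(f"{left:.1f}-{left + 0.1:.1f}: {dens:.2f}")
```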


FIGURE 32 Convolution with Simulation of Two Uniforms.

A1.2 Convolution of Twelve Uniforms


Example 2: The convolution (simple sum) of twelve identical and independent uniform
distributions approaches the Normal distribution.
If we take the same approach and simulate 12 i.i.d. Uniform[0, 1] distributions and sum
them, we obtain a very close to perfect Normal distribution, as shown in Figure 33, with a
goodness of fit of 99.3% after running 20,000 simulation trials.


FIGURE 33 Convolution with Simulation of Twelve Uniforms.

A1.3 Convolution of Multiple Exponentials


Example 3: The convolution (simple sum) of multiple identical and independent exponential
distributions approaches the Gamma (Erlang) distribution.
In this example, we sum two i.i.d. exponential distributions and then generalize to multiple
distributions. To get started, we use two identical Exponential[λ = 2] distributions:

$$f_{X+Y}(z) = \int_{0}^{z} f_X(x)\,f_Y(z-x)\,dx = \int_{0}^{z} \lambda e^{-\lambda x}\,\lambda e^{-\lambda(z-x)}\,dx = \lambda^{2} z e^{-\lambda z}$$

where $f(x) = \lambda e^{-\lambda x}$ is the pdf of the exponential distribution for all $x \geq 0$, $\lambda > 0$, and the
distribution's mean is $\mu = 1/\lambda$.
If we generalize to n random i.i.d. exponential distributions and apply mathematical induction:

$$f_{X_1+X_2+\cdots+X_n}(x) = \frac{x^{\,n-1} e^{-x/\mu}}{(n-1)!\;\mu^{n}} = \Gamma[0,\,n,\,1/\lambda]$$

$$f(x) = \frac{x^{\,\alpha-1} e^{-x/\beta}}{\Gamma(\alpha)\,\beta^{\alpha}} \quad \text{with any value of } \alpha > 0 \text{ and } \beta > 0$$


This is, of course, the generalized Gamma distribution with α and β as the shape and scale
parameters:

$$f_{X_1+X_2+\cdots+X_n}(x) = \Gamma[0,\,n,\,1/\lambda] = \Gamma[0,\,\alpha,\,\beta]$$

When the α parameter is a positive integer, the Gamma distribution is called the Erlang
distribution, used to predict waiting times in queuing systems, where the Erlang distribution is
the sum of independent and identically distributed random variables each having a memoryless
exponential distribution. Setting n as the number of these random variables, the mathematical
construct of the Erlang distribution is:

$$f(x) = \frac{x^{\,\alpha-1} e^{-x}}{(\alpha-1)!} \quad \text{for all } x > 0 \text{ and all positive integers of } \alpha$$

The empirical approach is shown below where we have two exponential distributions with λ =
2 (this means that the mean μ = 1/λ = 0.5). The sum of these two distributions, after running
20,000 Monte Carlo simulation trials and extracting and fitting the raw simulated sum data, shows
a 99.4% goodness of fit when fitted to the Gamma distribution with α = 2 and β = 0.5
(rounded), corresponding to n = 2 and λ = 2.
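The exponential-to-gamma result can be checked the same way. The short sketch below (illustrative only, not the Risk Simulator workflow) sums two Exponential(λ = 2) draws and compares the simulated moments with those of a Gamma distribution with α = 2 and β = 0.5.

```python
# Empirical check: the sum of two Exponential(lambda=2) draws behaves like Gamma(alpha=2, beta=0.5).
import numpy as np

rng = np.random.default_rng(7)
trials = 20_000
lam = 2.0
sums = rng.exponential(1 / lam, trials) + rng.exponential(1 / lam, trials)

alpha, beta = 2.0, 1 / lam                 # theoretical gamma shape and scale
print("simulated mean/var:", sums.mean(), sums.var())
print("gamma mean/var:    ", alpha * beta, alpha * beta**2)   # 1.0 and 0.5
```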

FIGURE 34 Convolution with Simulation of Multiple Exponentials.


APPENDIX 2: CONVOLUTION OF
MULTIPLICATION OF FREQUENCY
AND SEVERITY DISTRIBUTIONS IN
OPERATIONAL RISK CAPITAL
MODEL IN BASEL III
By Dr. Johnathan Mun, Ph.D., MBA, MS, BS, CRM, FRM, CFC, MIFC

Introduction In October 2014, the Basel Committee on Banking Supervision released a Basel Consultative
Document entitled, “Operational Risk: Revisions to the Simpler Approaches,” and in it describes
the concepts of operational risk as the sum product of frequency and severity of risk events within
a one-year time frame and defines the Operational Capital at Risk (OPCAR) as the tail-end 99.9%
Value at Risk.
The Basel Consultative Document describes a Single Loss Approximation (SLA) model defined
as $G^{-1}(0.999) \approx F_X^{-1}\!\left(1 - \frac{1-0.999}{\lambda}\right) + (\lambda - 1)\,E[X]$, where the inverse of the compound distribution
$G^{-1}(0.999)$ is the summation of the unexpected losses $UL = F_X^{-1}\!\left(1 - \frac{1-0.999}{\lambda}\right)$ and expected losses
$EL = (\lambda - 1)\,E[X]$; λ is the Poisson distribution's input parameter (average frequency per
period; in this case, 12 months); and X represents one of several types of continuous probability
distributions representing the severity of the losses (e.g., Pareto, Log Logistic, etc.). The
Document further states that this is an approximation model limited to subexponential-type
distributions only and is fairly difficult to compute. The X distribution's cumulative distribution
function (CDF) will need to be inverted using Fourier transform methods, and the results are
only approximations based on a limited set of inputs and their requisite constraints. Also, as
discussed below, the SLA model proposed in the Basel Consultative Document significantly
underestimates OPCAR.
This current appendix provides a new and alternative convolution methodology to compute
OPCAR that is applicable across a large variety of continuous probability distributions for risk
severity and includes a comparison of their results with Monte Carlo risk simulation methods. As
will be shown, both the new algorithm using numerical methods to model OPCAR and the
Monte Carlo risk simulation approach tend to the same results, and seeing that simulation can be
readily and easily applied in the CMOL software and Risk Simulator software (source:


www.realoptionsvaluation.com), we recommend using simulation methodologies for the sake of


simplicity. While the Basel Committee has, throughout its Basel II-III requirements and
recommendations, sought after simplicity so as not to burden banks with added complexity, it
still requires sufficient rigor and substantiated theory. Monte Carlo risk simulation methods pass
the test on both fronts and are, hence, the recommended path when modeling OPCAR.

Problem with Basel OPCAR
We submit that the SLA estimation model proposed in the Basel Consultative Document is
insufficient and significantly underestimates the actual OPCAR value. A cursory examination
shows that with various λ values, such as λ = 1, λ = 10, λ = 100, and λ = 1,000, the term
$1 - \frac{1-0.999}{\lambda}$ will yield probability values (η) of 0.999, 0.9999, 0.99999, and
0.999999. $F_X^{-1}(\eta)$ for any severity distribution X will only yield the severity distribution's
values, and not the total unexpected losses. For instance, suppose the severity distribution (X) of
a single risk event on average ranges from $1M (minimum) to $2M (maximum), and, for
simplicity, assume it is a Uniformly distributed severity of losses. Further suppose that the average
frequency of events is 1,000 times per year. Based on a back-of-the-envelope calculation, one could
then conclude that the absolute highest operational risk capital losses will never exceed $2B per
year (this assumes the absolute worst-case scenario of a $2M loss per event multiplied by 1,000
events in that entire year). Nonetheless, using the inverse of the X distribution at η = 0.999999
will yield a value close to $2M only, and adding that to the adjusted expected value of EL (let us
just assume somewhere close to $1.5B based on the Uniform distribution) is still a far cry from
the upper end of $2B.
Figure 35 shows a more detailed calculation that proves the Basel Consultative Document's SLA
approximation method significantly understates the true distributional operational Value at Risk
amount. In the figure, we test four examples of a Poisson–Weibull convolution. Poisson
distributions with Lambda risk event frequencies of λ = 10, λ = 25, λ = 50, and λ = 100 are tested,
together with a Weibull risk severity distribution with α = 1.5 and β = 2.5. These values are shown as
highlighted cells in the figure. Using the Basel OPCAR model, we compute the UL and EL. In
the UL computation, we use $F_X^{-1}\!\left(1 - \frac{1-0.999}{\lambda}\right)$. The column labeled PROB is $1 - \frac{1-0.999}{\lambda}$.
The ICDF X column denotes the $F_X^{-1}(\cdot)$ values. By applying the inverse of the Weibull CDF
on the probability, we obtain the UL values. Next, the EL calculations are simply
$(\lambda - 1)\,\mu_X$, with $\mu_X$ being the expected value of the Weibull distribution X, where
$\mu_X = \beta\,\Gamma(1 + 1/\alpha)$. The OPCAR is simply $UL + EL$. The four OPCAR results obtained are 31.30,
65.87, 122.82, and 236.18.
We then tested the results using Monte Carlo risk simulation using the Risk Simulator software
(source: www.realoptionsvaluation.com) by setting four Poisson distributions with their
respective  values and a single Weibull distribution with  = 1.5 and β = 2.5. Then, the Weibull
distribution is multiplied by each of the Poisson distributions to obtain the four Total Loss
Distributions. The simulation was run for 100,000 trials and the results are shown in Figure 35 as
forecast charts at the bottom. The Left Tail ≤ 99.9% quantile values were obtained and can be
seen in the charts (116.38, 258.00, 476.31, and 935.25). These are significantly higher than the
four OPCAR results.
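The comparison in Figure 35 can be approximated with the short sketch below. It computes the SLA value of OPCAR for one Poisson–Weibull pair using the formula as reconstructed above (UL from the inverse severity CDF at 1 − (1 − 0.999)/λ plus EL of (λ − 1) times the Weibull mean) and contrasts it with the 99.9th percentile of a simulated Frequency × Severity distribution. Parameter values follow the λ = 10 case in the figure; numpy and scipy stand in for the Risk Simulator and CMOL engines.

```python
# Illustrative comparison of the SLA approximation versus simulated 99.9% VaR (Poisson x Weibull).
import numpy as np
from scipy.stats import weibull_min
from math import gamma

lam, alpha_w, beta_w = 10, 1.5, 2.5          # Poisson lambda, Weibull shape and scale

# SLA approximation: UL at the adjusted percentile plus the adjusted expected losses.
prob = 1 - (1 - 0.999) / lam
UL = weibull_min(alpha_w, scale=beta_w).ppf(prob)
EL = (lam - 1) * beta_w * gamma(1 + 1 / alpha_w)
opcar_sla = UL + EL

# Monte Carlo: multiply simulated frequency by simulated severity, take the 99.9th percentile.
rng = np.random.default_rng(123)
trials = 100_000
total_loss = rng.poisson(lam, trials) * (beta_w * rng.weibull(alpha_w, trials))
opcar_sim = np.percentile(total_loss, 99.9)

print(f"SLA OPCAR ~ {opcar_sla:.2f}   Simulated 99.9% VaR ~ {opcar_sim:.2f}")
```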
Next, we ran a third approach using the newly revised convolution algorithm we propose in this
appendix. The convolution model shows the same values as the Monte Carlo risk simulation
results: 116.38, 258.00, 476.31, and 935.25, when rounded to two decimals. The inverse of the
convolution function computes the corresponding CDF percentiles and they are all 99.9%
(rounded to one decimal; see the Convolution and Percentile columns in Figure 35). Using the


same inverse of the convolution function, now applied to the Basel Consultative Document's SLA
model results, we found that the four SLA results were at the following OPCAR percentiles:
75.75%, 66.94%, 62.78%, and 60.38%, again significantly different than the requisite 99.9% Value
at Risk level for operational risk capital required by the Basel Committee.
Therefore, due to this significant understatement of operational capital at risk, the remainder of
this appendix focuses on explaining the theoretical details of the newly revised convolution model
we developed that provides exact OPCAR results under certain conditions. We then compare
the results using Monte Carlo risk simulation methods using Risk Simulator software as well as
the Credit, Market, Operational, and Liquidity (CMOL) Risk software (source:
www.realoptionsvaluation.com). Finally, the caveats and limitations of this new approach as well
as conclusions and recommendations are presented.

FIGURE 35 Comparing Basel OPCAR, Monte Carlo Risk Simulation, and the
Convolution Algorithm

Theory
Let X, Y, and Z be real-valued random variables whereby X and Y are independently distributed
with no correlations. Further, we define $F_X$, $F_Y$, and $F_Z$ as their corresponding CDFs. Next, we
assume that X is a random variable denoting the Frequency of a certain type of operational risk
occurring and is further assumed to have a discrete Poisson distribution. Y is a random variable
denoting the Severity of the risk (e.g., monetary value or some other economic value) and can be
distributed from among a group of continuous distributions (e.g., Fréchet, Gamma, Log Logistic,
Lognormal, Pareto, Weibull, etc.). Therefore, Frequency × Severity equals the Total Risk Losses, which
we define as Z, where $Z = XY$.

Then the Total Loss formula, which is also sometimes known as the Single Loss Approximation
(SLA) model, yields:

$$P(Z < t) = \sum_{k=0}^{\infty} P(XY < t \mid X = k)\,P(X = k)$$

$$P(Z < t) = \sum_{k=0}^{\infty} P(kY < t \mid X = k)\,P(X = k)$$

where the term with $k = 0$ is treated separately:

$$P(Z < t) = P(0 < t \mid X = 0)\,P(X = 0) + \sum_{k=1}^{\infty} P(Y < t/k)\,P(X = k)$$

$$= \sum_{k=1}^{\infty} F_Y(t/k)\,P(X = k) + \mathrm{Sign}(t)\,P(X = 0) \qquad \text{(Equation 1)}$$

This is because we know that P(0 < t)P(X = 0) = 0 for t < 0 and also P(0 < t)P(X = 0) = 0 for t
≥ 0. The next step is the selection of the number of summands in Equation 1. As previously
assumed, P X k is a Poisson distribution where P X k and the rate
!
of convergence in the series depends solely on the rate of convergence to 0 of and does not
!
depend on t, whereas the second multiplier P Y t/k 1! Therefore, for all values of t and
an arbitrary δ > 0 there is value of n such that:

∑ / (Equation 2)
!

In our case, δ can be set, for example, to 1/1000. Thus, instead of solving the quantile equation
for with an infinite series, on the left-hand side of the equation we have:

∑ (Equation 3)
!

We can then solve the equation:

, ∑ / (Equation 4)
!
with only n summands.

For example, if we choose p = 0.95, δ = 1/1000, and n such that Equation 2 holds, then the
solution $t^*$ of Equation 4 is such that:

$|F_Z(t^*) - p| < 1/1000$   (Equation 5)

In other words, a quantile found from Equation 4 is almost the true value, with a resulting error
precision in probability of less than 0.1%.

The only outstanding issue that remains is to find an estimate for n given any level of δ. We have:

$\sum_{k=n+1}^{\infty} F_Y(t/k)\,\frac{e^{-\lambda}\lambda^k}{k!} \;\le\; e^{-\lambda}\sum_{k=n+1}^{\infty} \frac{\lambda^k}{k!}$   (Equation 6)


The exponential series tail $\sum_{k=n+1}^{\infty} \frac{\lambda^k}{k!}$ in Equation 6 is bounded by $\frac{\lambda^{n+1}}{(n+1)!}\,e^{\lambda}$ by applying
Taylor's Expansion Theorem, with the remainder of the function left for higher exponential
function expansions. By substituting this upper bound into Equation 6, we have:

$\sum_{k=n+1}^{\infty} F_Y(t/k)\,\frac{e^{-\lambda}\lambda^k}{k!} \;\le\; \frac{\lambda^{n+1}}{(n+1)!}$   (Equation 7)

Now we need to find the lower bound for n for the solution of the inequality:

$\frac{\lambda^{n+1}}{(n+1)!} < \delta$   (Equation 8)

Consider the following two cases:

1. If $\lambda \le 1$, then $\frac{\lambda^{n+1}}{(n+1)!} \le \frac{1}{(n+1)!} \le \left(\frac{e}{n+1}\right)^{n+1}$, using the bound $(n+1)! \ge \left(\frac{n+1}{e}\right)^{n+1}$.
Consequently, we can solve the inequality $\left(\frac{e}{n+1}\right)^{n+1} < \delta$. Since $\left(\frac{n+1}{e}\right)^{n+1}$ grows quickly, we
can simply take $n \approx -\ln\delta$. For example, for $\delta = 1/1000$ it is sufficient to set $n = 7$ to satisfy
Equation 8.
2. If $\lambda > 1$, then, in this case, using the same bounds for the factorial, we can choose $n = n_1 + n_2$
such that the retained tail bound is satisfied:

$e^{-\lambda}\,\frac{\lambda^{n+1}}{(n+1)!} < \delta$   (Equation 9)

To make the remaining multipliers $\lambda/k$ (for $k > n_1$) decaying factors, we will need to choose
$n_1 + 1 > \lambda$. It will then be enough to set $n_2$ large enough (on the order of $-\ln\delta$) that the
remaining factor drops below δ. The choice of n can now be formulated as follows:
• Choose $n_1$ such that $n_1 + 1 > \lambda$ and then choose $n_2$ such that Equation 9 is satisfied.
• For example, if we have $\lambda = 10$ and $\delta = 1/1000$, then we need $n_1 + 1 > 10$, so
let's set $n_1 = 10$. Since $-\ln\delta \approx 6.9$ it is sufficient to set $n_2 = 10$ to satisfy
Equation 9.

Approximation to the solution of the equation for a quantile value

From the previous considerations we found that instead of solving $F_Z(t) = p$, we can
solve $F_{Z,n}(t) = \mathrm{Sign}(t)\,P(X = 0) + \sum_{k=1}^{n} F_Y(t/k)\,\frac{e^{-\lambda}\lambda^k}{k!} = p$ with n set at the level indicated
above. The value of t resulting from such a substitution will satisfy the inequality
$|F_Z(t) - F_{Z,n}(t)| < \delta$.


Solution of the equation $F_{Z,n}(t) = p$

By moving t to the left one unit at a time, we can find the first occurrence a of the event
such that $F_{Z,n}(a) < p$. Similarly, moving t to the right we can find b such that $F_{Z,n}(b) > p$.
Now we can use a simple Bisection Method or other search algorithms to find the optimal
solution to $F_{Z,n}(t) = p$.
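To make the procedure concrete, the following minimal Python sketch (illustrative only and not the CMOL or Risk Simulator implementation; all function names and the choice of a lognormal severity are assumptions) computes the truncated convolution CDF of Equation 4 and then brackets and bisects for the quantile:

    import math
    from scipy.stats import lognorm

    def truncated_convolution_cdf(t, lam, severity_cdf, n):
        # Equation 4: F_{Z,n}(t) = Sign(t) P(X=0) + sum_{k=1..n} F_Y(t/k) e^{-lam} lam^k / k!
        if t < 0:
            return 0.0
        total = math.exp(-lam)  # k = 0 term, i.e., Sign(t) * P(X = 0) for t >= 0
        for k in range(1, n + 1):
            poisson_pmf = math.exp(-lam) * lam ** k / math.factorial(k)
            total += severity_cdf(t / k) * poisson_pmf
        return total

    def opcar_quantile(p, lam, severity_cdf, n, tol=1e-6):
        # Bracket the quantile by stepping t to the right, then bisect F_{Z,n}(t) = p.
        a, b = 0.0, 1.0
        while truncated_convolution_cdf(b, lam, severity_cdf, n) < p:
            b *= 2.0
        while b - a > tol * max(1.0, b):
            mid = 0.5 * (a + b)
            if truncated_convolution_cdf(mid, lam, severity_cdf, n) < p:
                a = mid
            else:
                b = mid
        return 0.5 * (a + b)

    # Illustrative inputs: Poisson(lambda = 10) frequency and a lognormal severity
    # with log-space mean 1.8 and standard deviation 0.5 (parameters assumed here).
    severity_cdf = lognorm(s=0.5, scale=math.exp(1.8)).cdf
    print(opcar_quantile(0.999, 10, severity_cdf, n=40))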

Empirical Results: Convolution versus Monte Carlo Risk Simulation for OPCAR
Based on the explanations and algorithms outlined above, the convolution approximation
models are run and results compared with Monte Carlo risk simulation results. These
comparisons will serve as empirical evidence of the applicability of both approaches.
Figure 36 shows the 10 most commonly used Severity distributions, namely, Exponential,
Fréchet, Gamma, Logistic, Log Logistic, Lognormal (Arithmetic and Logarithmic inputs),
Gumbel, Pareto, and Weibull. The Frequency of risk occurrences is set as Poisson, with Lambda
(λ) or average frequency rate per period as its input. The input parameters for the 10 Severity
distributions are typically Alpha (α) and Beta (β), except for the Exponential distribution that uses
a rate parameter, Rho (ρ), and the Lognormal distribution that requires the mean (μ) and standard
deviation (σ) as inputs. For the first empirical test, we set λ = 10, α = 1.5, β = 2.5, ρ = 0.01, μ =
1.8, and σ = 0.5 for the Poisson frequency and 10 severity distributions. The Convolution Model
row in Figure 36 was computed using the algorithms outlined above, and a set of Monte Carlo
risk simulation assumptions were set with the same input parameters and simulated 100,000 trials
with a prespecified seed value. The results from the simulation were pasted back into the model
under the Simulated Results row and the Convolution Model was calculated based on these
simulated outputs. Figure 36 shows 5 sets of simulation percentiles: 99.9%, 99.0%, 95.0%, 90.0%,
and 50.0%. As can be seen, all of the simulation results and the convolution results on average
agree to approximately within ±0.2%.
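For reference, the Monte Carlo side of this comparison can be sketched in a few lines of Python (a simplified stand-in for the Risk Simulator runs described here, not the software itself; the trial count, seed, and lognormal severity parameters mirror this first empirical test):

    import numpy as np

    rng = np.random.default_rng(seed=1)      # a fixed seed so the trials are reproducible
    trials, lam, mu, sigma = 100_000, 10, 1.8, 0.5

    frequency = rng.poisson(lam, size=trials)         # Poisson frequency X
    severity = rng.lognormal(mu, sigma, size=trials)  # lognormal severity Y
    total_loss = frequency * severity                 # Z = X * Y, as defined in the Theory section

    for q in (99.9, 99.0, 95.0, 90.0, 50.0):
        print(f"{q:5.1f}% percentile: {np.percentile(total_loss, q):,.2f}")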
Figure 37 shows another empirical test whereby we select one specific distribution; in the
illustration, we used the Poisson–Weibull compound function. The alpha and beta parameters in
Weibull were changed, in concert with the Poisson’s lambda input. The first four columns show
alpha and beta being held steady while changing the lambda parameter, whereas the last six
columns show the same lambda with different alpha and beta input values (increasing alpha with
beta constant, and increasing beta with alpha constant). When the simulation results and the
convolution results were compared, on average, they agree to approximately within ±0.2%.
Figure 38 shows the Credit, Market, Operational, and Liquidity (CMOL) risk software’s
operational risk module and how the simulation results agree with the convolution model. The
CMOL software uses the algorithms as described above. The CMOL software settings are
100,000 Simulation Trials with a Seed Value of 1 and an OPCAR percentile set to 99.90%.
Figures 39–42 show additional empirical tests where all 10 severity distributions were perturbed,
convoluted, and compared with the simulation results. The results agree, on average, to within
approximately ±0.3%.


FIGURE 36 Comparing Convolution to Simulation Results I

FIGURE 37 Comparing Convolution to Simulation Results II


FIGURE 38 Comparing Convolution to Simulation Results III

FIGURE 39 Empirical Results 1: Small Value Inputs

FIGURE 40 Empirical Results 2: Average Value Inputs


FIGURE 41 Empirical Results 3: Medium Value Inputs

FIGURE 42 Empirical Results 4: High Value Inputs

High Lambda and Low Lambda Limitations
As seen in Equation 4, we have the convolution model $F_{Z,n}(t) = \mathrm{Sign}(t)\,P(X = 0) + \sum_{k=1}^{n} F_Y(t/k)\,\frac{e^{-\lambda}\lambda^k}{k!} = p$.
The results are accurate to as many decimal-points precision as desired as long as n is sufficiently
large, but this would mean that the convolution model is potentially mathematically intractable.
When  and k are high (the value k depends on the Poisson rate ), such as  = 10,000, the
summand cannot be easily computed. For instance, Microsoft Excel 2013 can only compute up
to a factorial of 170! where 171! and above returns the #NUM! error. Banks whose operational
risks have large  rate values (extremely high frequency of risk events when all risk types are
lumped together into a comprehensive frequency count) have several options: Create a
breakdown of the various risk types (broken down by risk categories, by department, by division,
etc.) such that the  is more manageable; use a continuous distribution approximation as shown
below; or use Monte Carlo risk simulation techniques, where large  values will not pose a
problem whatsoever.
Poisson distributions with large λ values approach the Normal distribution, and we can use this
fact to generate an approximation model for the convolution method. The actual deviation
between Poisson and Normal approximation can be estimated by the Berry–Esseen inequality.


For a more accurate and order-of-magnitude tighter estimation, we can use the Wilson–Hilferty
approximation instead.
For this large lambda situation, we will need to rewrite $F_Z(t)$ with respect to values of Y instead
of using values of Poisson-distributed X as was done in Equation 1. Because Y is continuously
distributed, the summation is replaced with integration:

$F_Z(t) = \int P(X \le t/y)\, f_Y(y)\, dy$   (Equation 10)

Here, $f_Y(y)$ is the PDF of Y. Integration is applied over the range of Y. According to the Wilson–
Hilferty approximation, we have:

$\int P(X \le t/y)\, f_Y(y)\, dy \approx \int \Phi\!\left(\frac{\mu_c - c}{\sigma_c}\right) f_Y(y)\, dy = I(t)$   (Equation 11)

where Φ is the standard normal CDF, $c = \left(\frac{\lambda}{t/y + 1}\right)^{1/3}$, $\mu_c = 1 - \frac{1}{9(t/y + 1)}$,
and $\sigma_c = \frac{1}{3\sqrt{t/y + 1}}$.
Next, similar to the approach for the small lambda case, we have to find values a and b for t such
that $I(a) < p$ and $I(b) > p$ and subsequently apply the bisection method to find an
approximation to the quantile t. As described, instead of a simple summation, we need to perform numerical
integration within the corresponding loop.
In addition, there are other simpler approximations via the Berry–Esseen inequality. For instance,
let $X = X_1 + X_2 + \cdots + X_\lambda$, where the $X_i$ are independent random variables, Poisson
distributed, and with Mean = 1. This means that $E[X_i] = 1$ and $E[X_i^2] = 2$. By shifting and
rescaling or normalization of X, that is, $(X - \lambda)/\sqrt{\lambda}$, we have:

$\int P(X \le t/y)\, f_Y(y)\, dy \approx \int \Phi\!\left(\frac{t/y - \lambda}{\sqrt{\lambda}}\right) f_Y(y)\, dy$   (Equation 12)

If $F_\lambda$ denotes the CDF of $(X - \lambda)/\sqrt{\lambda}$, then the Berry–Esseen inequality states:
$\sup_x |F_\lambda(x) - \Phi(x)| \le C\rho/\sqrt{\lambda}$, with C < 0.4748, $\rho = E|X_i - 1|^3$, and $X_i$ Poisson
distributed with a mean of 1. Apparently we cannot expect a better than $1/\sqrt{\lambda}$ rate of convergence
to the normal distribution asymptotically. In summary, solve Equation 12 and set it equal to p by
internally searching for the optimal t value.
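A minimal Python sketch of this large-lambda numerical integration, based on the normal approximation in Equation 12, might look as follows (illustrative only; the severity distribution, integration truncation, and tolerances are assumptions):

    import math
    from scipy.stats import norm, lognorm
    from scipy.integrate import quad

    lam = 10_000                                    # large Poisson rate (lambda)
    severity = lognorm(s=0.5, scale=math.exp(1.8))  # assumed severity distribution

    def I(t):
        # Equation 12: integrate the normal approximation of P(X <= t/y) against the severity PDF.
        integrand = lambda y: norm.cdf((t / y - lam) / math.sqrt(lam)) * severity.pdf(y)
        upper = severity.ppf(0.999999)              # truncate the integration range in the far tail
        value, _ = quad(integrand, 1e-9, upper)
        return value

    def quantile(p, tol=1e-4):
        a, b = 0.0, 1.0
        while I(b) < p:          # move t to the right until I(t) exceeds p
            b *= 2.0
        while b - a > tol * b:   # bisection on I(t) = p
            mid = 0.5 * (a + b)
            a, b = (mid, b) if I(mid) < p else (a, mid)
        return 0.5 * (a + b)

    print(quantile(0.999))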
Finally, for low lambda values, the algorithm still runs but will be a lot less accurate. Recall in
Equation 2 that $\sum_{k=n+1}^{\infty} F_Y(t/k)\,\frac{e^{-\lambda}\lambda^k}{k!} < \delta$, where δ signifies the level of error precision (the
lower the value, the higher the precision and accuracy of the results). The problem is that, with low λ
values, both k and n, which depend on λ, will also be low. This means that in the summand there
would be an insufficient number of integer intervals, making the summation function less
accurate. For best results, λ should be between 5 and 100.

Caveats, Conclusions, and Recommendations
Based on the theory, application, and empirical evidence above, one can conclude that the
convolution of Frequency × Severity independent stochastic random probability distributions
can be modeled using the algorithms outlined above as well as using Monte Carlo simulation
methods. On average, the results from these two methods tend to converge, with some slight
percentage variation due to randomness in the simulation process and to the precision depending
on the number of intervals in the summand or the numerical integration techniques employed.
However, as noted, the algorithms described above are only applicable when the lambda
parameter satisfies 5 ≤ λ ≤ 100; otherwise, the approximation using the numerical integration approach is required.


In contrast, Monte Carlo risk simulation methods are applicable in any realistic lambda situation
(in simulation, a high lambda condition can be treated by using a Normal distribution). As both
the numerical method and simulation approach tend to the same results, and seeing that
simulation can be readily and easily applied in CMOL and using Risk Simulator, we recommend
using simulation methodologies for the sake of simplicity. The Basel Committee has, throughout
its Basel II–III requirements and recommendations, sought simplicity so as not to burden the
banks with added complexity, while still requiring sufficient rigor and substantiated theory.
Therefore, Monte Carlo risk simulation methods are the recommended path when it comes to
modeling OPCAR.


A3
APPENDIX 3: PROBABILITY
DISTRIBUTIONS
This Appendix demonstrates the power of Monte Carlo simulation, but to get started with
simulation, one first needs to understand the concept of probability distributions. To begin to
understand probability, consider this example: You want to look at the distribution of nonexempt
wages within one department of a large company. First, you gather raw data––in this case, the
wages of each nonexempt employee in the department. Second, you organize the data into a
meaningful format and plot the data as a frequency distribution on a chart. To create a frequency
distribution, you divide the wages into group intervals and list these intervals on the chart’s
horizontal axis. Then you list the number or frequency of employees in each interval on the
chart’s vertical axis. Now you can easily see the distribution of nonexempt wages within the
department.
A glance at the chart illustrated in Figure 43 reveals that most of the employees (approximately
60 out of a total of 180) earn from $7.00 to $9.00 per hour.

FIGURE 43 Frequency Histogram I (vertical axis: Number of Employees; horizontal axis: Hourly Wage Ranges in Dollars, $7.00–$9.00).

You can chart this data as a probability distribution. A probability distribution shows the number
of employees in each interval as a fraction of the total number of employees. To create a
probability distribution, you divide the number of employees in each interval by the total number
of employees and list the results on the chart’s vertical axis.


The chart in Figure 44 shows you the number of employees in each wage group as a fraction of
all employees; you can estimate the likelihood or probability that an employee drawn at random
from the whole group earns a wage within a given interval. For example, assuming the same
conditions exist at the time the sample was taken, the probability is 0.33 (a one in three chance)
that an employee drawn at random from the whole group earns between $8.00 and $8.50 an
hour.

FIGURE 44 Frequency Histogram II (vertical axis: Probability; horizontal axis: Hourly Wage Ranges in Dollars, $7.00–$9.00).


Probability distributions are either discrete or continuous. Discrete probability distributions
describe distinct values, usually integers, with no intermediate values and are shown as a series
of vertical bars. A discrete distribution, for example, might describe the number of heads in
four flips of a coin as 0, 1, 2, 3, or 4. Continuous distributions are actually mathematical
abstractions because they assume the existence of every possible intermediate value between
two numbers. That is, a continuous distribution assumes there is an infinite number of values
between any two points in the distribution. However, in many situations, you can effectively use
a continuous distribution to approximate a discrete distribution even though the continuous
model does not necessarily describe the situation exactly.

Selecting the Right Probability Distribution
Plotting data is one guide to selecting a probability distribution. The following steps provide
another process for selecting probability distributions that best describe the uncertain variables
in your spreadsheets:
 Look at the variable in question. List everything you know about the conditions
surrounding this variable. You might be able to gather valuable information about the
uncertain variable from historical data. If historical data are not available, use your own
judgment, based on experience, listing everything you know about the uncertain
variable.
 Review the descriptions of the probability distributions.
 Select the distribution that characterizes this variable. A distribution characterizes a
variable when the conditions of the distribution match those of the variable.
Monte Carlo Simulation
Monte Carlo simulation in its simplest form is a random number generator that is useful for
forecasting, estimation, and risk analysis. A simulation calculates numerous scenarios of a model
by repeatedly picking values from a user-predefined probability distribution for the uncertain
variables and using those values for the model. As all those scenarios produce associated results
in a model, each scenario can have a forecast. Forecasts are events (usually with formulas or


functions) that you define as important outputs of the model. These usually are events such as
totals, net profit, or gross expenses.
Simplistically, think of the Monte Carlo simulation approach as repeatedly picking golf balls out
of a large basket with replacement. The size and shape of the basket depend on the distributional
input assumption (e.g., a normal distribution with a mean of 100 and a standard deviation of
10, versus a uniform distribution or a triangular distribution) where some baskets are deeper or
more symmetrical than others, allowing certain balls to be pulled out more frequently than others.
The number of balls pulled repeatedly depends on the number of trials simulated. For a large
model with multiple related assumptions, imagine a very large basket wherein many smaller
baskets reside. Each small basket has its own set of golf balls that are bouncing around.
Sometimes these small baskets are linked with each other (if there is a correlation between the
variables) and the golf balls are bouncing in tandem, while other times the balls are bouncing
independent of one another. The balls that are picked each time from these interactions within
the model (the large central basket) are tabulated and recorded, providing a forecast output
result of the simulation.
With Monte Carlo simulation, Risk Simulator generates random values for each assumption’s
probability distribution that are totally independent. In other words, the random value selected
for one trial has no effect on the next random value generated. Use Monte Carlo sampling when
you want to simulate real-world what-if scenarios for your spreadsheet model.
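As a simple illustration of these mechanics outside of any particular tool, the short Python sketch below simulates a toy forecast (net profit) from two independent input assumptions; the distributions, parameters, and trial count are purely illustrative:

    import numpy as np

    rng = np.random.default_rng()
    trials = 10_000

    # Input assumptions: one random draw per trial from each distribution
    # (one "golf ball" picked from each basket, with replacement).
    revenue = rng.normal(loc=100, scale=10, size=trials)
    cost = rng.triangular(left=60, mode=70, right=85, size=trials)

    # Forecast: the model output tabulated across all simulated scenarios.
    net_profit = revenue - cost

    print("Mean forecast:", net_profit.mean())
    print("5th-95th percentile range:", np.percentile(net_profit, [5, 95]))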
The two following sections provide a detailed listing of the different types of discrete and
continuous probability distributions that can be used in Monte Carlo simulation.


Discrete Distributions
Bernoulli or Yes/No Distribution
The Bernoulli distribution is a discrete distribution with two outcomes (e.g., heads or tails, success
or failure, 0 or 1). It is the binomial distribution with one trial and can be used to simulate Yes/No
or Success/Failure conditions. This distribution is the fundamental building block of other more
complex distributions. For instance:
 Binomial distribution: a Bernoulli distribution with higher number of n total trials that
computes the probability of x successes within this total number of trials.
 Geometric distribution: a Bernoulli distribution with higher number of trials that
computes the number of failures required before the first success occurs.
 Negative binomial distribution: a Bernoulli distribution with higher number of trials that
computes the number of failures before the Xth success occurs.
The mathematical constructs for the Bernoulli distribution are as follows:

$P(x) = \begin{cases} 1-p & \text{for } x = 0 \\ p & \text{for } x = 1 \end{cases}$

or

$P(x) = p^x (1-p)^{1-x}$

Mean $= p$
Standard Deviation $= \sqrt{p(1-p)}$
Skewness $= \frac{1-2p}{\sqrt{p(1-p)}}$
Excess Kurtosis $= \frac{6p^2 - 6p + 1}{p(1-p)}$
Probability of success (p) is the only distributional parameter. Also, it is important to note that
there is only one trial in the Bernoulli distribution, and the resulting simulated value is either 0 or
1.
Input requirements: Probability of success > 0 and < 1 (i.e., 0.0001 ≤ p ≤ 0.9999).

Binomial Distribution
The binomial distribution describes the number of times a particular event occurs in a fixed
number of trials, such as the number of heads in 10 flips of a coin or the number of defective
items out of 50 items chosen.
The three conditions underlying the binomial distribution are:
 For each trial, only two outcomes are possible that are mutually exclusive.
 Trials are independent––what happens in the first trial does not affect the next trial.
 The probability of an event occurring remains the same from trial to trial.
The mathematical constructs for the binomial distribution are as follows:
$P(x) = \frac{n!}{x!(n-x)!}\, p^x (1-p)^{(n-x)}$ for $n > 0$; $x = 0, 1, 2, \ldots, n$; and $0 < p < 1$


Mean  np
Standard Deviation  np (1  p )

Skewness = 1 2 p
np(1  p)

Excess Kurtosis = 6 p  6 p  1
2

np (1  p )

Probability of success (p) and the integer number of total trials (n) are the distributional
parameters. The number of successful trials is denoted x. It is important to note that probability
of success (p) of 0 or 1 are trivial conditions that do not require any simulations and, hence, are
not allowed in the software.
Input requirements: Probability of success > 0 and < 1 (i.e., 0.0001 ≤ p ≤ 0.9999).
Number of trials ≥ 1 or positive integers and ≤ 1000 (for larger trials, use the normal distribution
with the relevant computed binomial mean and standard deviation as the normal distribution’s
parameters).
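As a rough illustration of that large-trial guidance, the following Python snippet (illustrative only) compares a direct binomial simulation against a normal distribution parameterized with the binomial mean np and standard deviation √(np(1−p)):

    import numpy as np

    rng = np.random.default_rng(42)
    n, p, trials = 5_000, 0.3, 100_000   # n here exceeds the 1,000-trial software limit noted above

    binomial_draws = rng.binomial(n, p, size=trials)
    normal_draws = rng.normal(loc=n * p, scale=np.sqrt(n * p * (1 - p)), size=trials)

    print("Binomial 99th percentile:", np.percentile(binomial_draws, 99))
    print("Normal   99th percentile:", np.percentile(normal_draws, 99))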

Discrete Uniform
The discrete uniform distribution is also known as the equally likely outcomes distribution, where the
distribution has a set of N elements and each element has the same probability. This distribution
is related to the uniform distribution but its elements are discrete and not continuous.
The mathematical constructs for the discrete uniform distribution are as follows:

$P(x) = \frac{1}{N}$

Mean $= \frac{N+1}{2}$ (ranked value)
Standard Deviation $= \sqrt{\frac{(N-1)(N+1)}{12}}$ (ranked value)
Skewness $= 0$ (i.e., the distribution is perfectly symmetrical)
Excess Kurtosis $= \frac{-6(N^2+1)}{5(N-1)(N+1)}$ (ranked value)
Input requirements: Minimum < maximum and both must be integers (negative integers and
zero are allowed).

Geometric Distribution
The geometric distribution describes the number of trials until the first successful occurrence,
such as the number of times you need to spin a roulette wheel before you win.

The three conditions underlying the geometric distribution are:


 The number of trials is not fixed.
 The trials continue until the first success.
 The probability of success is the same from trial to trial.


The mathematical constructs for the geometric distribution are as follows:

$P(x) = p(1-p)^{x-1}$ for $0 < p < 1$ and $x = 1, 2, \ldots, n$

Mean $= \frac{1}{p} - 1$
Standard Deviation $= \sqrt{\frac{1-p}{p^2}}$
Skewness $= \frac{2-p}{\sqrt{1-p}}$
Excess Kurtosis $= \frac{p^2 - 6p + 6}{1-p}$
Probability of success (p) is the only distributional parameter. The number of successful trials
simulated is denoted x, which can only take on positive integers.
Input requirements: Probability of success > 0 and < 1 (i.e., 0.0001 ≤ p ≤ 0.9999). It is important
to note that probability of success (p) of 0 or 1 are trivial conditions that do not require any
simulations and, hence, are not allowed in the software.

Hypergeometric Distribution
The hypergeometric distribution is similar to the binomial distribution in that both describe the
number of times a particular event occurs in a fixed number of trials. The difference is that
binomial distribution trials are independent, whereas hypergeometric distribution trials change
the probability for each subsequent trial and are called “trials without replacement.” For example,
suppose a box of manufactured parts is known to contain some defective parts. You choose a
part from the box, find it is defective, and remove the part from the box. If you choose another
part from the box, the probability that it is defective is somewhat lower than for the first part
because you have already removed a defective part. If you had replaced the defective part, the
probabilities would have remained the same, and the process would have satisfied the conditions
for a binomial distribution.
The three conditions underlying the hypergeometric distribution are:
 The total number of items or elements (the population size) is a fixed number, a finite
population. The population size must be less than or equal to 1,750.
 The sample size (the number of trials) represents a portion of the population.
 The known initial probability of success in the population changes after each trial.
The mathematical constructs for the hypergeometric distribution are as follows:

$P(x) = \dfrac{\dfrac{N_x!}{x!(N_x-x)!}\;\dfrac{(N-N_x)!}{(n-x)!(N-N_x-n+x)!}}{\dfrac{N!}{n!(N-n)!}}$ for $x = \mathrm{Max}(n-(N-N_x),0), \ldots, \mathrm{Min}(n, N_x)$

Mean $= \frac{N_x n}{N}$


Standard Deviation $= \sqrt{\frac{(N-N_x)\,N_x\, n\,(N-n)}{N^2(N-1)}}$

Skewness $= \frac{(N-2N_x)(N-2n)}{N-2}\sqrt{\frac{N-1}{(N-N_x)\,N_x\, n\,(N-n)}}$

Excess Kurtosis = complex function


The number of items in the population or Population Size (N), trials sampled or Sample Size (n),
and number of items in the population that have the successful trait or Population Successes (Nx)
are the distributional parameters. The number of successful trials is denoted x.
Input requirements:
Population Size ≥ 2 and integer.
Sample Size > 0 and integer.
Population Successes > 0 and integer.
Population Size > Population Successes.
Sample Size < Population Successes.
Population Size < 1750.

Negative Binomial Distribution
The negative binomial distribution is useful for modeling the distribution of the number of
additional trials required in addition to the number of successful occurrences required (R). For
instance, in order to close a total of 10 sales opportunities, how many extra sales calls would you
need to make above 10 calls given some probability of success in each call? The x-axis shows the
number of additional calls required or the number of failed calls. The number of trials is not fixed,
the trials continue until the Rth success, and the probability of success is the same from trial to
trial. Probability of success (p) and number of successes required (R) are the distributional
parameters. It is essentially a superdistribution of the geometric and binomial distributions. This
distribution shows the probabilities of each number of trials in excess of R to produce the
required success R.
The three conditions underlying the negative binomial distribution are:
 The number of trials is not fixed.
 The trials continue until the rth success.
 The probability of success is the same from trial to trial.
The mathematical constructs for the negative binomial distribution are as follows:

$P(x) = \frac{(x+r-1)!}{(r-1)!\,x!}\, p^r (1-p)^x$ for $x = r, r+1, \ldots$; and $0 < p < 1$

Mean $= \frac{r(1-p)}{p}$
Standard Deviation $= \sqrt{\frac{r(1-p)}{p^2}}$


Skewness = 2 p
r (1  p )

Excess Kurtosis = p  6 p  6
2

r (1  p)
Probability of success (p) and required successes (R) are the distributional parameters.
Input requirements:
Successes required must be positive integers > 0 and < 8000.
Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). It is important to note that
probability of success (p) of 0 or 1 are trivial conditions that do not require any simulations and,
hence, are not allowed in the software.

Pascal Distribution
The Pascal distribution is useful for modeling the distribution of the number of total trials
required to obtain the number of successful occurrences required. For instance, to close a total
of 10 sales opportunities, how many total sales calls would you need to make given some
probability of success in each call? The x-axis shows the total number of calls required, which
includes successful and failed calls. The number of trials is not fixed, the trials continue until the
Rth success, and the probability of success is the same from trial to trial. Pascal distribution is
related to the negative binomial distribution. Negative binomial distribution computes the
number of events required in addition to the number of successes required given some
probability (in other words, the total failures), whereas the Pascal distribution computes the total
number of events required (in other words, the sum of failures and successes) to achieve the
successes required given some probability. Successes required and probability are the two
distributional parameters.
The three conditions underlying the Pascal distribution are:
 The number of trials is not fixed.
 The trials continue until the rth success.
 The probability of success is the same from trial to trial.
The mathematical constructs for the Pascal distribution are shown below:

$f(x) = \begin{cases} \frac{(x-1)!}{(x-s)!(s-1)!}\, p^s (1-p)^{x-s} & \text{for all } x \ge s \\ 0 & \text{otherwise} \end{cases}$

$F(x) = \begin{cases} \displaystyle\sum_{k=s}^{x} \frac{(k-1)!}{(k-s)!(s-1)!}\, p^s (1-p)^{k-s} & \text{for all } x \ge s \\ 0 & \text{otherwise} \end{cases}$

Mean $= \frac{s}{p}$
Standard Deviation $= \sqrt{\frac{s(1-p)}{p^2}}$
Skewness $= \frac{2-p}{\sqrt{s(1-p)}}$


p2  6 p  6
Excess Kurtosis =
r (1  p)

Successes Required and Probability are the distributional parameters.


Input requirements: Successes required > 0 and is an integer; 0 ≤ Probability ≤ 1.

Poisson Distribution
The Poisson distribution describes the number of times an event occurs in a given interval, such
as the number of telephone calls per minute or the number of errors per page in a document.
The three conditions underlying the Poisson distribution are:
 The number of possible occurrences in any interval is unlimited.
 The occurrences are independent. The number of occurrences in one interval does not
affect the number of occurrences in other intervals.
 The average number of occurrences must remain the same from interval to interval.

The mathematical constructs for the Poisson distribution are as follows:

$P(x) = \frac{e^{-\lambda}\lambda^x}{x!}$ for $x$ and $\lambda > 0$

Mean $= \lambda$
Standard Deviation $= \sqrt{\lambda}$
Skewness $= \frac{1}{\sqrt{\lambda}}$
Excess Kurtosis $= \frac{1}{\lambda}$
Rate, or Lambda (λ), is the only distributional parameter.
Input requirements: Rate > 0 and ≤ 1000 (i.e., 0.0001 ≤ rate ≤ 1000).


Continuous Distributions

Arcsine Distribution
The arcsine distribution is U-shaped and is a special case of the beta distribution when both shape
and scale are equal to 0.5. Values close to the minimum and maximum have high probabilities of
occurrence whereas values between these two extremes have very small probabilities of
occurrence. Minimum and maximum are the distributional parameters.
The mathematical constructs for the Arcsine distribution are shown below. The probability
density function (PDF) is denoted f(x) and the cumulative distribution function (CDF) is denoted
F(x).
 1
 for 0  x  1
f ( x )    x (1  x )
0
 otherwise
0 x0
2

F ( x)   sin 1 ( x ) for 0  x  1

1 x 1

Min  Max
Mean 
2

(Max  Min)2
Standard Deviation 
8

Skewness = 0 for all inputs


Excess Kurtosis = 1.5 for all inputs
Minimum and maximum are the distributional parameters.
Input requirements:
Maximum > minimum (either input parameter can be positive, negative, or zero).

Beta Distribution
The beta distribution is very flexible and is commonly used to represent variability over a fixed
range. One of the more important applications of the beta distribution is its use as a conjugate
distribution for the parameter of a Bernoulli distribution. In this application, the beta distribution
is used to represent the uncertainty in the probability of occurrence of an event. It is also used to
describe empirical data and predict the random behavior of percentages and fractions, as the
range of outcomes is typically between 0 and 1.
The value of the beta distribution lies in the wide variety of shapes it can assume when you vary
the two parameters, alpha and beta. If the parameters are equal, the distribution is symmetrical.
If either parameter is 1 and the other parameter is greater than 1, the distribution is J-shaped. If
alpha is less than beta, the distribution is said to be positively skewed (most of the values are near
the minimum value). If alpha is greater than beta, the distribution is negatively skewed (most of
the values are near the maximum value).

The mathematical constructs for the beta distribution are as follows:


$f(x) = \dfrac{x^{(\alpha-1)}(1-x)^{(\beta-1)}}{\left[\dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}\right]}$ for $\alpha > 0$; $\beta > 0$; $x > 0$

Mean $= \frac{\alpha}{\alpha+\beta}$
Standard Deviation $= \sqrt{\frac{\alpha\beta}{(\alpha+\beta)^2(1+\alpha+\beta)}}$
Skewness $= \frac{2(\beta-\alpha)\sqrt{1+\alpha+\beta}}{(2+\alpha+\beta)\sqrt{\alpha\beta}}$
Excess Kurtosis $= \frac{3(\alpha+\beta+1)\left[\alpha\beta(\alpha+\beta-6) + 2(\alpha+\beta)^2\right]}{\alpha\beta(\alpha+\beta+2)(\alpha+\beta+3)} - 3$
Alpha () and beta () are the two distributional shape parameters, and  is the Gamma function.
The two conditions underlying the beta distribution are:
 The uncertain variable is a random value between 0 and a positive value.
 The shape of the distribution can be specified using two positive values.
Input requirements:
Alpha and beta both > 0 and can be any positive value.

Beta 3 and Beta 4 Distributions
The original Beta distribution only takes two inputs, Alpha and Beta shape parameters. However,
the output of the simulated value is between 0 and 1. In the Beta 3 distribution, we add an extra
parameter called Location or Shift, where we are now free to move away from this 0 to 1 output
limitation; therefore, the Beta 3 distribution is also known as a Shifted Beta distribution. Similarly,
the Beta 4 distribution adds two input parameters, Location or Shift, and Factor. The original
beta distribution is multiplied by the factor and shifted by the location, and, therefore the Beta 4
is also known as the Multiplicative Shifted Beta distribution.
The mathematical constructs for the Beta 3 and Beta 4 distributions are based on those in the
Beta distribution, with the relevant shifts and factorial multiplication (e.g., the PDF and CDF will
be adjusted by the shift and factor, and some of the moments, such as the mean, will similarly be
affected; the standard deviation, in contrast, is only affected by the factorial multiplication,
whereas the remaining moments are not affected at all).
Input requirements:
Location >=< 0 (location can take on any positive or negative value including zero).
Factor > 0.

Cauchy Distribution, or Lorentzian or Breit-Wigner Distribution
The Cauchy distribution, also called the Lorentzian or Breit-Wigner distribution, is a continuous
distribution describing resonance behavior. It also describes the distribution of horizontal
distances at which a line segment tilted at a random angle cuts the x-axis.
The mathematical constructs for the Cauchy or Lorentzian distribution are as follows:


1  /2
f ( x) 
 ( x  m) 2   2 / 4
The Cauchy distribution is a special case because it does not have any theoretical moments (mean,
standard deviation, skewness, and kurtosis) as they are all undefined.
Mode location () and scale () are the only two parameters in this distribution. The location
parameter specifies the peak or mode of the distribution, while the scale parameter specifies the
half-width at half-maximum of the distribution. In addition, the mean and variance of a Cauchy,
or Lorentzian, distribution are undefined.
In addition, the Cauchy distribution is the Student’s T distribution with only 1 degree of freedom.
This distribution is also constructed by taking the ratio of two standard normal distributions
(normal distributions with a mean of zero and a variance of one) that are independent of one
another.
Input requirements: Location (Alpha) can be any value. Scale (Beta) > 0 and can be any positive
value.

Chi-Square Distribution
The chi-square distribution is a probability distribution used predominantly in hypothesis testing
and is related to the gamma and standard normal distributions. For instance, the sum of squared
independent standard normal distributions is distributed as a chi-square (χ²) with k degrees of freedom:

$Z_1^2 + Z_2^2 + \cdots + Z_k^2 \;\overset{d}{\sim}\; \chi_k^2$
The mathematical constructs for the chi-square distribution are as follows:

$f(x) = \frac{0.5^{k/2}}{\Gamma(k/2)}\, x^{k/2 - 1} e^{-x/2}$ for all $x > 0$

Mean $= k$
Standard Deviation $= \sqrt{2k}$
Skewness $= 2\sqrt{\frac{2}{k}}$
Excess Kurtosis $= \frac{12}{k}$
Γ is the gamma function. Degrees of freedom, k, is the only distributional parameter.
The chi-square distribution can also be modeled using a gamma distribution by setting the
shape parameter equal to $\frac{k}{2}$ and the scale equal to $2S^2$, where S is the scale.
Input requirements: Degrees of freedom > 1 and must be an integer < 300.
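As a quick, illustrative check of the two relationships above (the sum of squared standard normals and the gamma equivalence), using scipy's parameterization with the gamma scale set to 2 (i.e., S = 1):

    import numpy as np
    from scipy.stats import chi2, gamma

    rng = np.random.default_rng(7)
    k, trials = 10, 200_000

    # Sum of k squared standard normals versus a chi-square with k degrees of freedom.
    sum_sq = (rng.standard_normal((trials, k)) ** 2).sum(axis=1)
    print(sum_sq.mean(), sum_sq.var())      # approximately k and 2k

    # Chi-square CDF versus a gamma CDF with shape k/2 and scale 2 (i.e., S = 1).
    x = 12.0
    print(chi2.cdf(x, k), gamma.cdf(x, a=k / 2, scale=2))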

Cosine Distribution
The cosine distribution looks like a logistic distribution where the median value between the
minimum and maximum has the highest peak or mode, carrying the maximum probability of
occurrence, while the extreme tails close to the minimum and maximum values have lower
probabilities. Minimum and maximum are the distributional parameters.


The mathematical constructs for the Cosine distribution are shown below:

$f(x) = \begin{cases} \frac{1}{2b}\cos\!\left(\frac{x-a}{b}\right) & \text{for } min \le x \le max \\ 0 & \text{otherwise} \end{cases}$

where $a = \frac{min + max}{2}$ and $b = \frac{max - min}{\pi}$

$F(x) = \begin{cases} \frac{1}{2}\left[1 + \sin\!\left(\frac{x-a}{b}\right)\right] & \text{for } min \le x \le max \\ 1 & \text{for } x > max \end{cases}$

Mean $= \frac{Min + Max}{2}$
Standard Deviation $= \sqrt{\frac{(Max - Min)^2(\pi^2 - 8)}{4\pi^2}}$
Skewness is always equal to 0
Excess Kurtosis $= \frac{6(90 - \pi^4)}{5(\pi^2 - 6)^2}$

Minimum and maximum are the distributional parameters.


Input requirements:
Maximum > minimum (either input parameter can be positive, negative, or zero).

Double Log Distribution
The double log distribution looks like the Cauchy distribution where the central tendency is
peaked and carries the maximum value probability density but declines faster the further it gets
away from the center, creating a symmetrical distribution with an extreme peak in between the
minimum and maximum values. Minimum and maximum are the distributional parameters.
The mathematical constructs for the Double Log distribution are shown below:

$f(x) = \begin{cases} -\frac{1}{2b}\ln\!\left(\frac{|x-a|}{b}\right) & \text{for } min \le x \le max \\ 0 & \text{otherwise} \end{cases}$

where $a = \frac{min + max}{2}$ and $b = \frac{max - min}{2}$

$F(x) = \begin{cases} \frac{1}{2} - \frac{a-x}{2b}\left[1 - \ln\!\left(\frac{a-x}{b}\right)\right] & \text{for } min \le x \le a \\[1mm] \frac{1}{2} + \frac{x-a}{2b}\left[1 - \ln\!\left(\frac{x-a}{b}\right)\right] & \text{for } a < x \le max \end{cases}$

Mean $= \frac{Min + Max}{2}$
Standard Deviation $= \sqrt{\frac{(Max - Min)^2}{36}}$


Skewness is always equal to 0


Excess Kurtosis is a complex function and not easily represented
Minimum and maximum are the distributional parameters.
Input requirements:
Maximum > minimum (either input parameter can be positive, negative, or zero).

Erlang Distribution
The Erlang distribution is the same as the Gamma distribution with the requirement that the
Alpha or shape parameter must be a positive integer. An example application of the Erlang
distribution is the calibration of the rate of transition of elements through a system of
compartments. Such systems are widely used in biology and ecology (e.g., in epidemiology, an
individual may progress at an exponential rate from being healthy to becoming a disease carrier,
and continue exponentially from being a carrier to being infectious). Alpha (also known as shape)
and Beta (also known as scale) are the distributional parameters.
The mathematical constructs for the Erlang distribution are shown below:

$f(x) = \begin{cases} \dfrac{\left(\frac{x}{\beta}\right)^{\alpha-1} e^{-x/\beta}}{\beta\,(\alpha-1)!} & \text{for } x \ge 0 \\ 0 & \text{otherwise} \end{cases}$

$F(x) = \begin{cases} 1 - e^{-x/\beta}\displaystyle\sum_{i=0}^{\alpha-1} \frac{(x/\beta)^i}{i!} & \text{for } x \ge 0 \\ 0 & \text{otherwise} \end{cases}$

Mean $= \alpha\beta$
Standard Deviation $= \sqrt{\alpha\beta^2}$
Skew $= \frac{2}{\sqrt{\alpha}}$
Excess Kurtosis $= \frac{6}{\alpha}$
Alpha and Beta are the distributional parameters.


Input requirements:
Alpha (Shape) > 0 and is an Integer
Beta (Scale) > 0

Exponential Distribution
The exponential distribution is widely used to describe events recurring at random points in time,
such as the time between failures of electronic equipment or the time between arrivals at a service
booth. It is related to the Poisson distribution, which describes the number of occurrences of an
event in a given interval of time. An important characteristic of the exponential distribution is the
“memoryless” property, which means that the future lifetime of a given object has the same


distribution regardless of the time it existed. In other words, time has no effect on future
outcomes.
The condition underlying the exponential distribution is:
 The exponential distribution describes the amount of time between occurrences.
The mathematical constructs for the exponential distribution are as follows:

$f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$; $\lambda > 0$

Mean $= \frac{1}{\lambda}$
Standard Deviation $= \frac{1}{\lambda}$
Skewness $= 2$ (this value applies to all success rate inputs)
Excess Kurtosis $= 6$ (this value applies to all success rate inputs)
Success rate () is the only distributional parameter. The number of successful trials is denoted
x.
Input requirements: Rate > 0.

Exponential 2 Distribution
The Exponential 2 distribution uses the same constructs as the original Exponential distribution
but adds a Location or Shift parameter. The Exponential distribution starts from a minimum
value of 0, whereas this Exponential 2, or Shifted Exponential, distribution shifts the starting
location to any other value.
Rate, or Lambda, and Location, or Shift, are the distributional parameters.
Input requirements:
Rate (Lambda) > 0.
Location can be any positive or negative value including zero.

Extreme Value Distribution, or Gumbel Distribution
The extreme value distribution (Type 1) is commonly used to describe the largest value of a
response over a period of time, for example, in flood flows, rainfall, and earthquakes. Other
applications include the breaking strengths of materials, construction design, and aircraft loads
and tolerances. The extreme value distribution is also known as the Gumbel distribution.
The mathematical constructs for the extreme value distribution are as follows:

$f(x) = \frac{1}{\beta}\, z e^{-z}$ where $z = e^{-\frac{x-\alpha}{\beta}}$ for $\beta > 0$; and any value of x and α

Mean $= \alpha + 0.577215\beta$
Standard Deviation $= \sqrt{\frac{1}{6}\pi^2\beta^2}$
Skewness $= \frac{12\sqrt{6}\,(1.2020569)}{\pi^3} = 1.13955$ (this applies for all values of mode and scale)
3


Excess Kurtosis = 2.4 (this applies for all values of mode and scale)
Mode (α) and scale (β) are the distributional parameters.
Calculating Parameters
There are two standard parameters for the extreme value distribution: mode and scale. The mode
parameter is the most likely value for the variable (the highest point on the probability
distribution). After you select the mode parameter, you can estimate the scale parameter. The
scale parameter is a number greater than 0. The larger the scale parameter, the greater the variance.
Input requirements:
Mode Alpha can be any value.
Scale Beta > 0.

F Distribution, or Fisher-Snedecor Distribution
The F distribution, also known as the Fisher-Snedecor distribution, is another continuous
distribution used most frequently for hypothesis testing. Specifically, it is used to test the statistical
difference between two variances in analysis of variance tests and likelihood ratio tests. The F
distribution with the numerator degree of freedom n and denominator degree of freedom m is
related to the chi-square distribution in that:

$\frac{\chi^2_n / n}{\chi^2_m / m} \;\overset{d}{\sim}\; F_{n,m}$

Mean $= \frac{m}{m-2}$
Standard Deviation $= \sqrt{\frac{2m^2(m+n-2)}{n(m-2)^2(m-4)}}$ for all $m > 4$
Skewness $= \frac{2(m+2n-2)}{m-6}\sqrt{\frac{2(m-4)}{n(m+n-2)}}$
Excess Kurtosis $= \frac{12(m^3 - 8m^2 + 20m - 16 + 5m^2 n - 32mn + 44n + 5mn^2 - 22n^2)}{n(m-6)(m-8)(n+m-2)}$
The numerator degree of freedom n and denominator degree of freedom m are the only
distributional parameters.
Input requirements:
Degrees of freedom numerator & degrees of freedom denominator must both be integers > 0

Gamma Distribution (Erlang Distribution)
The gamma distribution applies to a wide range of physical quantities and is related to other
distributions: lognormal, exponential, Pascal, Erlang, Poisson, and chi-square. It is used in
meteorological processes to represent pollutant concentrations and precipitation quantities. The
gamma distribution is also used to measure the time between the occurrence of events when the
event process is not completely random. Other applications of the gamma distribution include
inventory control, economic theory, and insurance risk theory.


The gamma distribution is most often used as the distribution of the amount of time until the
Rth occurrence of an event in a Poisson process. When used in this fashion, the three conditions
underlying the gamma distribution are:
 The number of possible occurrences in any unit of measurement is not limited to a fixed
number.
 The occurrences are independent. The number of occurrences in one unit of
measurement does not affect the number of occurrences in other units.
 The average number of occurrences must remain the same from unit to unit.
The mathematical constructs for the gamma distribution are as follows:

$f(x) = \dfrac{\left(\frac{x}{\beta}\right)^{\alpha-1} e^{-x/\beta}}{\Gamma(\alpha)\,\beta}$ with any value of $\alpha > 0$ and $\beta > 0$

Mean $= \alpha\beta$
Standard Deviation $= \sqrt{\alpha\beta^2}$
Skewness $= \frac{2}{\sqrt{\alpha}}$
Excess Kurtosis $= \frac{6}{\alpha}$

Shape parameter alpha () and scale parameter beta () are the distributional parameters, and 
is the Gamma function. When the alpha parameter is a positive integer, the gamma distribution
is called the Erlang distribution, used to predict waiting times in queuing systems, where the
Erlang distribution is the sum of independent and identically distributed random variables each
having a memoryless exponential distribution. Setting n as the number of these random variables,
the mathematical construct of the Erlang distribution is:
x n 1e  x for all x > 0 and all positive integers of n
f ( x) 
(n  1)!

Input requirements:
Scale beta > 0 and can be any positive value.
Shape alpha ≥ 0.05 and any positive value.
Location can be any value.

Laplace Distribution
The Laplace distribution is also sometimes called the double exponential distribution because it
can be constructed with two exponential distributions (with an additional location parameter)
spliced together back-to-back, creating an unusual peak in the middle. The probability density
function of the Laplace distribution is reminiscent of the normal distribution. However, whereas
the normal distribution is expressed in terms of the squared difference from the mean, the
Laplace density is expressed in terms of the absolute difference from the mean, making the
Laplace distribution’s tails fatter than those of the normal distribution. When the location
parameter is set to zero, the Laplace distribution’s random variable is exponentially distributed


with an inverse of the scale parameter. Alpha (also known as location) and Beta (also known as
scale) are the distributional parameters.
The mathematical constructs for the Laplace distribution are shown below:

$f(x) = \frac{1}{2\beta}\exp\!\left(-\frac{|x-\alpha|}{\beta}\right)$

$F(x) = \begin{cases} \frac{1}{2}\exp\!\left(\frac{x-\alpha}{\beta}\right) & \text{when } x < \alpha \\[1mm] 1 - \frac{1}{2}\exp\!\left(-\frac{x-\alpha}{\beta}\right) & \text{when } x \ge \alpha \end{cases}$

Mean $= \alpha$
Standard Deviation $= 1.4142\beta$
Skewness is always equal to 0 as it is a symmetrical distribution
Excess Kurtosis is always equal to 3
Input requirements:
Alpha (Location) can take on any positive or negative value including zero.
Beta (Scale) > 0.

Logistic Distribution
The logistic distribution is commonly used to describe growth, that is, the size of a population
expressed as a function of a time variable. It also can be used to describe chemical reactions and
the course of growth for a population or individual.
The mathematical constructs for the logistic distribution are as follows:

$f(x) = \dfrac{e^{\frac{\mu - x}{\alpha}}}{\alpha\left[1 + e^{\frac{\mu - x}{\alpha}}\right]^2}$ for any value of μ and α

Mean $= \mu$
Standard Deviation $= \sqrt{\frac{1}{3}\pi^2\alpha^2}$
Skewness $= 0$ (this applies to all mean and scale inputs)
Excess Kurtosis $= 1.2$ (this applies to all mean and scale inputs)
Mean (μ) and scale (α) are the distributional parameters.
Calculating Parameters
There are two standard parameters for the logistic distribution: mean and scale. The mean
parameter is the average value, which for this distribution is the same as the mode because this is
a symmetrical distribution. After you select the mean parameter, you can estimate the scale
parameter. The scale parameter is a number greater than 0. The larger the scale parameter, the
greater the variance.


Input requirements:
Scale Beta > 0 and can be any positive value.
Mean Alpha can be any value.

Lognormal Distribution
The lognormal distribution is widely used in situations where values are positively skewed, for
example, in financial analysis for security valuation or in real estate for property valuation, and
where values cannot fall below zero.
Stock prices are usually positively skewed rather than normally (symmetrically) distributed. Stock
prices exhibit this trend because they cannot fall below the lower limit of zero but might increase
to any price without limit. Similarly, real estate prices illustrate positive skewness as property
values cannot become negative.
The three conditions underlying the lognormal distribution are:
 The uncertain variable can increase without limits but cannot fall below zero.
 The uncertain variable is positively skewed, with most of the values near the lower limit.
 The natural logarithm of the uncertain variable yields a normal distribution.
Generally, if the coefficient of variability is greater than 30%, use a lognormal distribution.
Otherwise, use the normal distribution.
The mathematical constructs for the lognormal distribution are as follows:

$f(x) = \frac{1}{x\sqrt{2\pi}\,\ln(\sigma)}\, e^{-\frac{[\ln(x) - \ln(\mu)]^2}{2[\ln(\sigma)]^2}}$ for $x > 0$; $\mu > 0$ and $\sigma > 0$

Mean $= \exp\!\left(\mu + \frac{\sigma^2}{2}\right)$
Standard Deviation $= \sqrt{\exp(2\mu + \sigma^2)\left[\exp(\sigma^2) - 1\right]}$
Skewness $= \sqrt{\exp(\sigma^2) - 1}\;\left(2 + \exp(\sigma^2)\right)$
Excess Kurtosis $= \exp(4\sigma^2) + 2\exp(3\sigma^2) + 3\exp(2\sigma^2) - 6$

Mean (μ) and standard deviation (σ) are the distributional parameters.
Input requirements:
Mean and standard deviation both > 0 and can be any positive value.
Lognormal Parameter Sets
By default, the lognormal distribution uses the arithmetic mean and standard deviation. For
applications for which historical data are available, it is more appropriate to use either the
logarithmic mean and standard deviation, or the geometric mean and standard deviation.
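The conversion between the two parameter sets follows from the moment formulas above. A small, illustrative Python helper (not a CMOL function) that maps an arithmetic mean and standard deviation to the corresponding logarithmic (log-space) parameters, and back, might look like this:

    import math

    def arithmetic_to_log(mean, stdev):
        # Convert arithmetic mean/stdev of a lognormal variable to log-space mu/sigma.
        variance_ratio = 1 + (stdev / mean) ** 2
        sigma_log = math.sqrt(math.log(variance_ratio))
        mu_log = math.log(mean) - 0.5 * math.log(variance_ratio)
        return mu_log, sigma_log

    def log_to_arithmetic(mu_log, sigma_log):
        # Invert the mapping using Mean = exp(mu + sigma^2/2) and the variance formula above.
        mean = math.exp(mu_log + 0.5 * sigma_log ** 2)
        stdev = math.sqrt(math.exp(2 * mu_log + sigma_log ** 2) * (math.exp(sigma_log ** 2) - 1))
        return mean, stdev

    mu, sigma = arithmetic_to_log(10.0, 3.0)
    print(mu, sigma)                      # log-space parameters
    print(log_to_arithmetic(mu, sigma))   # recovers (10.0, 3.0)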


Lognormal 3 Distribution
The Lognormal 3 distribution uses the same constructs as the original Lognormal distribution
but adds a Location, or Shift, parameter. The Lognormal distribution starts from a minimum
value of 0, whereas this Lognormal 3, or Shifted Lognormal, distribution shifts the starting
location to any other value.
Mean, Standard Deviation, and Location (Shift) are the distributional parameters.
Input requirements:
Mean > 0.
Standard Deviation > 0.
Location can be any positive or negative value including zero.

Normal Distribution
The normal distribution is the most important distribution in probability theory because it
describes many natural phenomena, such as people's IQs or heights. Decision makers can use
the normal distribution to describe uncertain variables such as the inflation rate or the future price
of gasoline.
The three conditions underlying the normal distribution are:
 Some value of the uncertain variable is the most likely (the mean of the distribution).
 The uncertain variable could as likely be above the mean as it could be below the mean
(symmetrical about the mean).
 The uncertain variable is more likely to be in the vicinity of the mean than further away.
The mathematical constructs for the normal distribution are as follows:

$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ for all values of x and μ; while $\sigma > 0$

Mean $= \mu$
Standard Deviation $= \sigma$
Skewness $= 0$ (this applies to all inputs of mean and standard deviation)
Excess Kurtosis $= 0$ (this applies to all inputs of mean and standard deviation)
Mean (μ) and standard deviation (σ) are the distributional parameters.
Input requirements:
Standard deviation > 0 and can be any positive value.
Mean can take on any value.

Parabolic Distribution
The parabolic distribution is a special case of the beta distribution when Shape = Scale = 2. Values
close to the minimum and maximum have low probabilities of occurrence, whereas values
between these two extremes have higher probabilities of occurrence. Minimum and maximum
are the distributional parameters.
The mathematical constructs for the Parabolic distribution are shown below:


$f(x) = \dfrac{x^{(\alpha-1)}(1-x)^{(\beta-1)}}{\left[\dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}\right]}$ for $\alpha > 0$; $\beta > 0$; $x > 0$

where the functional form above is for a Beta distribution, and for a Parabolic function, we set
Alpha = Beta = 2 and a shift of location in Minimum, with a multiplicative factor of (Maximum – Minimum).

Mean $= \frac{Min + Max}{2}$
Standard Deviation $= \sqrt{\frac{(Max - Min)^2}{20}}$
Skewness $= 0$
Excess Kurtosis $= -0.8571$


Minimum and Maximum are the distributional parameters.
Input requirements:
Maximum > minimum (either input parameter can be positive, negative, or zero).

Pareto Distribution
The Pareto distribution is widely used for the investigation of distributions associated with such
empirical phenomena as city population sizes, the occurrence of natural resources, the size of
companies, personal incomes, stock price fluctuations, and error clustering in communication
circuits.
The mathematical constructs for the Pareto distribution are as follows:

$f(x) = \frac{\beta L^{\beta}}{x^{(1+\beta)}}$ for $x \ge L$

Mean $= \frac{\beta L}{\beta - 1}$
Standard Deviation $= \sqrt{\frac{\beta L^2}{(\beta-1)^2(\beta-2)}}$
Skewness $= \sqrt{\frac{\beta-2}{\beta}}\left[\frac{2(\beta+1)}{\beta-3}\right]$
Excess Kurtosis $= \frac{6(\beta^3 + \beta^2 - 6\beta - 2)}{\beta(\beta-3)(\beta-4)}$

Shape (β) and Location (L) are the distributional parameters.
Calculating Parameters
There are two standard parameters for the Pareto distribution: location and shape. The location
parameter is the lower bound for the variable. After you select the location parameter, you can
estimate the shape parameter. The shape parameter is a number greater than 0, usually greater


than 1. The larger the shape parameter, the smaller the variance and the thicker the right tail of
the distribution.
Input requirements: Location > 0 and can be any positive value; Shape ≥ 0.05.

Pearson V Distribution
The Pearson V distribution is related to the Inverse Gamma distribution, where it is the reciprocal
of the variable distributed according to the Gamma distribution. The Pearson V distribution is also
used to model time delays where there is almost certainty of some minimum delay and the
maximum delay is unbounded, for example, delay in arrival of emergency services and time to
repair a machine. Alpha (also known as shape) and Beta (also known as scale) are the
distributional parameters.
The mathematical constructs for the Pearson V distribution are shown below:

$f(x) = \frac{x^{-(\alpha+1)} e^{-\beta/x}}{\beta^{-\alpha}\,\Gamma(\alpha)}$

$F(x) = \frac{\Gamma(\alpha, \beta/x)}{\Gamma(\alpha)}$

Mean $= \frac{\beta}{\alpha - 1}$
Standard Deviation $= \sqrt{\frac{\beta^2}{(\alpha-1)^2(\alpha-2)}}$
Skew $= \frac{4\sqrt{\alpha-2}}{\alpha - 3}$
Excess Kurtosis $= \frac{30\alpha - 66}{(\alpha-3)(\alpha-4)}$

Input requirements:
Alpha (Shape) > 0.
Beta (Scale) > 0.

Pearson VI Distribution
The Pearson VI distribution is related to the Gamma distribution, where it is the rational function
of two variables distributed according to two Gamma distributions. Alpha 1 (also known as shape
1), Alpha 2 (also known as shape 2), and Beta (also known as scale) are the distributional
parameters.
The mathematical constructs for the Pearson VI distribution are shown below:

$f(x) = \frac{(x/\beta)^{\alpha_1 - 1}}{\beta\, B(\alpha_1, \alpha_2)\left[1 + (x/\beta)\right]^{\alpha_1 + \alpha_2}}$

$F(x) = F_B\!\left(\frac{x}{x + \beta}\right)$

Mean $= \frac{\beta\alpha_1}{\alpha_2 - 1}$
Standard Deviation $= \sqrt{\frac{\beta^2\alpha_1(\alpha_1 + \alpha_2 - 1)}{(\alpha_2 - 1)^2(\alpha_2 - 2)}}$
Skew $= 2\sqrt{\frac{\alpha_2 - 2}{\alpha_1(\alpha_1 + \alpha_2 - 1)}}\left[\frac{2\alpha_1 + \alpha_2 - 1}{\alpha_2 - 3}\right]$
Excess Kurtosis $= \frac{3(\alpha_2 - 2)}{(\alpha_2 - 3)(\alpha_2 - 4)}\left[\frac{2(\alpha_2 - 1)^2}{\alpha_1(\alpha_1 + \alpha_2 - 1)} + (\alpha_2 + 5)\right] - 3$
Input requirements:
Alpha 1 (Shape 1) > 0.
Alpha 2 (Shape 2) > 0.
Beta (Scale) > 0.

PERT Distribution
The PERT distribution is widely used in project and program management to define the worst-
case, nominal-case, and best-case scenarios of project completion time. It is related to the Beta
and Triangular distributions. PERT distribution can be used to identify risks in project and cost
models based on the likelihood of meeting targets and goals across any number of project
components using minimum, most likely, and maximum values, but it is designed to generate a
distribution that more closely resembles realistic probability distributions. The PERT distribution
can provide a close fit to the normal or lognormal distributions. Like the triangular distribution,
the PERT distribution emphasizes the "most likely" value over the minimum and maximum
estimates. However, unlike the triangular distribution, the PERT distribution constructs a smooth
curve that places progressively more emphasis on values around (near) the most likely value and
less on values around the edges. In practice, this means that we "trust" the estimate for the most
likely value, and we believe that even if it is not exactly accurate (as estimates seldom are), we have
an expectation that the resulting value will be close to that estimate. Assuming that many real-
world phenomena are normally distributed, the appeal of the PERT distribution is that it
produces a curve similar to the normal curve in shape, without knowing the precise parameters
of the related normal curve. Minimum, Most Likely, and Maximum are the distributional
parameters.
The mathematical constructs for the PERT distribution are shown below:

$f(x) = \frac{(x - min)^{A_1 - 1}(max - x)^{A_2 - 1}}{B(A_1, A_2)\,(max - min)^{A_1 + A_2 - 1}}$

where $A_1 = 6\left[\dfrac{\frac{min + 4(likely) + max}{6} - min}{max - min}\right]$ and $A_2 = 6\left[\dfrac{max - \frac{min + 4(likely) + max}{6}}{max - min}\right]$

and B is the Beta function.

Mean $= \frac{Min + 4\,Mode + Max}{6}$


Standard Deviation = \sqrt{\frac{(\mu - \text{Min})(\text{Max} - \mu)}{7}}

Skew = \sqrt{\frac{7}{(\mu - \text{Min})(\text{Max} - \mu)}}\left(\frac{\text{Min} + \text{Max} - 2\mu}{4}\right)

Input requirements:
Minimum ≤ Most Likely ≤ Maximum and can be positive, negative, or zero.
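
Because the PERT density above is a rescaled Beta distribution with shapes A1 and A2, it can be sampled by drawing from a Beta distribution and mapping the result onto [Min, Max]. The Python sketch below is illustrative only (not CMOL code) and uses assumed Min, Most Likely, and Max values.

# Illustrative sketch, not CMOL code: PERT sampling via a rescaled Beta distribution.
from scipy.stats import beta as beta_dist

mn, likely, mx = 10.0, 14.0, 25.0                        # assumed Min, Most Likely, Max
a1 = 6 * ((mn + 4 * likely + mx) / 6 - mn) / (mx - mn)   # A1 from the formula above
a2 = 6 * (mx - (mn + 4 * likely + mx) / 6) / (mx - mn)   # A2 from the formula above

samples = mn + (mx - mn) * beta_dist(a1, a2).rvs(size=100_000)
print(samples.mean())                                    # approx. (Min + 4*Likely + Max)/6 = 15.17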

Power Distribution The Power distribution is related to the exponential distribution in that the probability of small
outcomes is large but exponentially decreases as the outcome value increases. Alpha (also known
as shape) is the only distributional parameter.
The mathematical constructs for the Power distribution are shown below:
f(x) = \alpha x^{\alpha - 1}

F(x) = x^{\alpha}

Mean = \frac{\alpha}{1 + \alpha}

Standard Deviation = \sqrt{\frac{\alpha}{(1 + \alpha)^2 (2 + \alpha)}}

Skew = \sqrt{\frac{\alpha + 2}{\alpha}}\;\frac{2(\alpha - 1)}{\alpha + 3}

Excess Kurtosis is a complex function and cannot be readily computed


Input requirements: Alpha > 0.
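
The density f(x) = αx^(α−1) on the unit interval is available directly in SciPy as the powerlaw distribution, so the moments above can be verified in a few lines of Python. The sketch below is illustrative only (not CMOL code), and the Alpha value is an assumption for the example.

# Illustrative sketch, not CMOL code: the Power distribution on [0, 1] as SciPy's powerlaw.
from scipy.stats import powerlaw

alpha = 2.5                      # assumed example parameter
dist = powerlaw(a=alpha)

print(dist.mean())               # alpha / (1 + alpha) ~ 0.7143
print(dist.cdf(0.5))             # F(x) = x**alpha -> 0.5**2.5 ~ 0.1768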

Power 3 The Power 3 distribution uses the same constructs as the original Power distribution but adds a
Distribution Location, or Shift, parameter, and a multiplicative Factor parameter. The Power distribution
starts from a minimum value of 0, whereas this Power 3, or Shifted Multiplicative Power,
distribution shifts the starting location to any other value.
Alpha, Location or Shift, and Factor are the distributional parameters.
Input requirements:
Alpha > 0.05.
Location, or Shift, can be any positive or negative value including zero.
Factor > 0.
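
Assuming that the Location (Shift) parameter moves the lower bound of the distribution and that the Factor parameter rescales the unit-interval base range multiplicatively, a shifted multiplicative Power variate can be sketched in Python as follows. This is illustrative only (not CMOL code); the mapping of Location and Factor onto SciPy's loc and scale arguments, and the parameter values, are assumptions.

# Illustrative sketch, not CMOL code: Power 3 as a shifted and rescaled powerlaw,
# assuming Location -> loc and Factor -> scale.
from scipy.stats import powerlaw

alpha, location, factor = 2.5, 100.0, 10.0      # assumed example parameters
dist = powerlaw(a=alpha, loc=location, scale=factor)

print(dist.support())            # (100.0, 110.0): support now starts at Location
print(dist.mean())               # location + factor * alpha / (1 + alpha) ~ 107.14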


Student’s t The Student’s t distribution is the most widely used distribution in hypothesis testing. This
Distribution distribution is used to estimate the mean of a normally distributed population when the sample
size is small, to test the statistical significance of the difference between two sample means, and
to construct confidence intervals for small sample sizes.
The mathematical constructs for the t distribution are as follows:

f(t) = \frac{\Gamma[(r + 1)/2]}{\sqrt{r\pi}\,\Gamma[r/2]}\left(1 + \frac{t^2}{r}\right)^{-(r+1)/2}

Mean = 0 (this applies to all degrees of freedom r except if the distribution is shifted to another
nonzero central location)

Standard Deviation = \sqrt{\frac{r}{r - 2}}

Skewness = 0 (this applies to all degrees of freedom r)

Excess Kurtosis = \frac{6}{r - 4} for all r > 4

where t = \frac{x - \bar{x}}{s} and \Gamma is the gamma function.
s
Degrees of freedom r is the only distributional parameter.
The t distribution is related to the F distribution as follows: the square of a value of t with r degrees
of freedom is distributed as F with 1 and r degrees of freedom. The overall shape of the
probability density function of the t distribution also resembles the bell shape of a normally
distributed variable with mean 0 and variance 1, except that it is a bit lower and wider or is
leptokurtic (fat tails at the ends and peaked center). As the number of degrees of freedom grows
(say, above 30), the t distribution approaches the normal distribution with mean 0 and variance
1.
Input requirements:
Degrees of freedom ≥ 1 and must be an integer.
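
The moments listed above can be reproduced with SciPy's t distribution; the Python sketch below is illustrative only (not CMOL code) and uses an assumed value for the degrees of freedom r.

# Illustrative sketch, not CMOL code: Student's t with r degrees of freedom.
from scipy.stats import t

r = 10                              # assumed degrees of freedom
dist = t(df=r)

print(dist.mean())                  # 0
print(dist.std())                   # sqrt(r / (r - 2)) ~ 1.1180
print(dist.stats(moments='k'))      # excess kurtosis = 6 / (r - 4) = 1.0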

Triangular The triangular distribution describes a situation where you know the minimum, maximum, and
Distribution most likely values to occur. For example, you could describe the number of cars sold per week
when past sales show the minimum, maximum, and usual number of cars sold.
The three conditions underlying the triangular distribution are:
 The minimum number of items is fixed.
 The maximum number of items is fixed.
 The most likely number of items falls between the minimum and maximum values,
forming a triangular-shaped distribution, which shows that values near the minimum
and maximum are less likely to occur than those near the most-likely value.
The mathematical constructs for the triangular distribution are as follows:

f(x) = \frac{2(x - \text{Min})}{(\text{Max} - \text{Min})(\text{Likely} - \text{Min})} for Min ≤ x ≤ Likely

f(x) = \frac{2(\text{Max} - x)}{(\text{Max} - \text{Min})(\text{Max} - \text{Likely})} for Likely < x ≤ Max

Mean = \frac{1}{3}(\text{Min} + \text{Likely} + \text{Max})

Standard Deviation = \sqrt{\frac{1}{18}\left(\text{Min}^2 + \text{Likely}^2 + \text{Max}^2 - \text{Min}\,\text{Max} - \text{Min}\,\text{Likely} - \text{Max}\,\text{Likely}\right)}

Skewness = \frac{\sqrt{2}\,(\text{Min} + \text{Max} - 2\,\text{Likely})(2\,\text{Min} - \text{Max} - \text{Likely})(\text{Min} - 2\,\text{Max} + \text{Likely})}{5\left(\text{Min}^2 + \text{Max}^2 + \text{Likely}^2 - \text{Min}\,\text{Max} - \text{Min}\,\text{Likely} - \text{Max}\,\text{Likely}\right)^{3/2}}
Excess Kurtosis = –0.6 (this applies to all inputs of Min, Max, and Likely)
Minimum value (Min), most-likely value (Likely), and maximum value (Max) are the distributional
parameters.
Input requirements:
Min ≤ Most Likely ≤ Max, and all three values can be positive, negative, or zero;
however, Min must be strictly less than Max.
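
SciPy parameterizes the triangular distribution slightly differently, with c = (Likely − Min)/(Max − Min), loc = Min, and scale = Max − Min. The Python sketch below (illustrative only, not CMOL code, with assumed input values) shows how the Min, Likely, and Max parameters above map onto that form.

# Illustrative sketch, not CMOL code: triangular distribution via SciPy's triang.
from scipy.stats import triang

mn, likely, mx = 5.0, 8.0, 14.0     # assumed Min, Likely, Max
dist = triang(c=(likely - mn) / (mx - mn), loc=mn, scale=mx - mn)

print(dist.mean())                  # (Min + Likely + Max) / 3 = 9.0
samples = dist.rvs(size=10_000)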

Uniform With the uniform distribution, all values fall between the minimum and maximum and occur
Distribution with equal likelihood.
The three conditions underlying the uniform distribution are:
 The minimum value is fixed.
 The maximum value is fixed.
 All values between the minimum and maximum occur with equal likelihood.
The mathematical constructs for the uniform distribution are as follows:
f(x) = \frac{1}{\text{Max} - \text{Min}} for all values such that Min < Max

Mean = \frac{\text{Min} + \text{Max}}{2}

Standard Deviation = \sqrt{\frac{(\text{Max} - \text{Min})^2}{12}}
Skewness = 0 (this applies to all inputs of Min and Max)
Excess Kurtosis = –1.2 (this applies to all inputs of Min and Max)
Maximum value (Max) and minimum value (Min) are the distributional parameters.
Input requirements: Min < Max and can take any value.
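
For completeness, the uniform moments above can be checked with SciPy, which takes loc = Min and scale = Max − Min. The sketch below is illustrative only (not CMOL code), and the bounds are assumed example values.

# Illustrative sketch, not CMOL code: uniform distribution on [Min, Max].
from scipy.stats import uniform

mn, mx = 2.0, 10.0                  # assumed Min and Max
dist = uniform(loc=mn, scale=mx - mn)

print(dist.mean())                  # (Min + Max) / 2 = 6.0
print(dist.std())                   # sqrt((Max - Min)**2 / 12) ~ 2.3094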


Weibull The Weibull distribution describes data resulting from life and fatigue tests. It is commonly used
Distribution to describe failure time in reliability studies as well as the breaking strengths of materials in
(Rayleigh reliability and quality control tests. Weibull distributions are also used to represent various physical
Distribution) quantities, such as wind speed.
The Weibull distribution is a family of distributions that can assume the properties of several
other distributions. For example, depending on the shape parameter you define, the Weibull
distribution can be used to model the exponential and Rayleigh distributions, among others. The
Weibull distribution is very flexible. When the Weibull shape parameter is equal to 1.0, the
Weibull distribution is identical to the exponential distribution. The Weibull location parameter
lets you set up an exponential distribution to start at a location other than 0.0. When the shape
parameter is less than 1.0, the Weibull distribution becomes a steeply declining curve. A
manufacturer might find this effect useful in describing part failures during a burn-in period.
The mathematical constructs for the Weibull distribution are as follows:

f(x) = \frac{\alpha}{\beta}\left[\frac{x}{\beta}\right]^{\alpha - 1} e^{-\left(\frac{x}{\beta}\right)^{\alpha}}

Mean = \beta\,\Gamma(1 + \alpha^{-1})

Standard Deviation = \sqrt{\beta^2\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]}

Skewness = \frac{2\Gamma^3(1 + \alpha^{-1}) - 3\Gamma(1 + \alpha^{-1})\Gamma(1 + 2\alpha^{-1}) + \Gamma(1 + 3\alpha^{-1})}{\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]^{3/2}}

Excess Kurtosis = \frac{-6\Gamma^4(1 + \alpha^{-1}) + 12\Gamma^2(1 + \alpha^{-1})\Gamma(1 + 2\alpha^{-1}) - 3\Gamma^2(1 + 2\alpha^{-1}) - 4\Gamma(1 + \alpha^{-1})\Gamma(1 + 3\alpha^{-1}) + \Gamma(1 + 4\alpha^{-1})}{\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]^2}

Shape (α) and central location scale (β) are the distributional parameters, and Γ is the Gamma
function.
Input requirements: Shape Alpha ≥ 0.05. Scale Beta > 0 and can be any positive value.
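
The shape/scale parameterization above corresponds to SciPy's weibull_min distribution with c = Alpha and scale = Beta. The Python sketch below (illustrative only, not CMOL code, with assumed parameters) checks the mean formula and the reduction to the exponential distribution when Alpha = 1.

# Illustrative sketch, not CMOL code: Weibull with shape alpha and scale beta.
import numpy as np
from scipy.stats import weibull_min, expon

alpha, beta = 1.5, 2.0              # assumed example parameters
dist = weibull_min(c=alpha, scale=beta)

print(dist.mean())                  # beta * Gamma(1 + 1/alpha) ~ 1.8054
# With shape alpha = 1, the Weibull collapses to the exponential distribution:
print(np.isclose(weibull_min(c=1.0, scale=beta).mean(), expon(scale=beta).mean()))  # True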

Weibull 3 The Weibull 3 distribution uses the same constructs as the original Weibull distribution but adds
Distribution a Location, or Shift, parameter. The Weibull distribution starts from a minimum value of 0,
whereas this Weibull 3, or Shifted Weibull, distribution shifts the starting location to any other
value.
Alpha, Beta, and Location or Shift are the distributional parameters.
Input requirements:
Alpha (Shape) ≥ 0.05.
Beta (Central Location Scale) > 0 and can be any positive value.
Location can be any positive or negative value including zero.
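
Assuming the Location (Shift) parameter maps onto SciPy's loc argument, the shifted Weibull can be sketched as follows; this is illustrative only (not CMOL code), and the parameter values are assumed.

# Illustrative sketch, not CMOL code: Weibull 3 as weibull_min shifted by a location term.
from scipy.stats import weibull_min

alpha, beta, location = 1.5, 2.0, 10.0      # assumed example parameters
dist = weibull_min(c=alpha, scale=beta, loc=location)

print(dist.support()[0])            # support now starts at the Location value: 10.0
print(dist.mean())                  # location + beta * Gamma(1 + 1/alpha) ~ 11.81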


© Copyright 2005-2015 Dr. Johnathan Mun


All rights reserved.
Real Options Valuation, Inc.
4101F Dublin Blvd., Ste. 425
Dublin, California 94568 U.S.A.
Phone 925.271.4438 • Fax 925.369.0450
admin@realoptionsvaluation.com
www.risksimulator.com
www.realoptionsvaluation.com

