
Top-down credit risk, its models, and what a corporate credit risk modeling desk can do about it

1. Risk Management and Shareholders' Value in Banking


Part III Credit Risk
10. Credit-scoring Models
10.1 Introduction
10.2 Linear discriminant analysis
10.3 Regression Models
10.4 Inductive models
11. Capital Market Models
11.1 Introduction
11.2 The approach based on corporate bond spreads
11.3 Structural models based on stock prices
Appendix 11A Calculating the Fair Spread on a Loan
Appendix 11B Real and Risk-Neutral Probabilities of Default
12. LGD and Recovery Risk
12.1 Introduction
12.2 What factors drive recovery rates?
12.3 The estimation of recovery rates
12.4 From past data to LGD estimates
12.5 Results from selected empirical studies
12.6 Recovery risk
12.7 The link between default risk and recovery risk
13. Rating Systems
13.1 Introduction
13.2 Rating assignment
13.3 Rating quantification
13.4 Rating validation
14. Portfolio Models
14.1 Introduction
14.2 Selecting time horizon and confidence level
14.3 The migration approach
14.4 The structural approach: PortfolioManager
14.5 The macroeconomic approach: CreditPortfolioView
14.6 The actuarial approach: CreditRisk+
14.7 A brief comparison of the main models
14.8 Some limitations of the credit risk models
15. Some applications of credit risk measurement models
15.1 Introduction
15.2 Loan Pricing
15.3 Risk-adjusted performance measurement
15.4 Setting limits on risk-taking units
15.5 Optimizing the composition of the loan portfolio
Appendix 15A Credit Risk Transfer Tools
16. Counterparty Risk on OTC Derivatives
16.1 Introduction
16.2 Settlement and pre-settlement risk
16.3 Estimating pre-settlement risk
16.4 Risk-adjusted performance measurement
16.5 Risk-mitigation tools for pre-settlement risk
In practice, the higher expected loss connected with the undrawn portion is usually covered by a commission proportional to the UP; this commission

The distinction between expected and unexpected loss is important when dealing with a diversified portfolio of exposures. The expected loss, being anticipated, is covered through provisions and pricing, while it is the unexpected loss around that mean that capital must absorb and that diversification can reduce.
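As a rough numeric illustration of the two quantities (all figures below are invented; the one-exposure UL formula treats LGD as fixed, so default is the only source of randomness, and the undrawn portion enters through an assumed credit conversion factor):

```python
import numpy as np

# Hypothetical single exposure -- every number here is made up for illustration
drawn, undrawn = 800_000.0, 200_000.0
ccf = 0.5             # assumed credit conversion factor for the undrawn portion
pd_, lgd = 0.02, 0.45

ead = drawn + ccf * undrawn          # exposure at default
el = pd_ * lgd * ead                 # expected loss: covered by pricing/commission
# unexpected loss as the standard deviation of loss for one exposure,
# with LGD fixed so the default event is the only random driver
ul = ead * lgd * np.sqrt(pd_ * (1 - pd_))

print(f"EAD = {ead:,.0f}, EL = {el:,.0f}, UL = {ul:,.0f}")
```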
Appendix 3A
A1 Derivation of Least Squares Estimates
A2 Linearity and Unbiasedness Properties of Least-Squares Estimators
A3 Variances and Standard Errors of Least-Squares Estimators
A4 Covariance between Beta1 and Beta2
A5 The Least-Squares Estimators of Beta2
A6 Minimum-Variance Property of Least-Squares Estimators
A7 Consistency of Least-Squares Estimators
5.1 Statistical Prerequisites
probability
probability distributions
Type I and Type II errors
level of significance
power of a statistical test
confidence interval
5.2 Interval Estimation: Some Basic Ideas
5.3 Confidence Intervals for Regression Coefficients beta1 and beta2
5.4 Confidence Interval for sigma^2
5.5 Hypothesis Testing: General Comments
null hypothesis
alternative hypothesis
maintained hypothesis
simple/composite
confidence interval / test of significance
5.6 Hypothesis Testing: The Confidence-Interval Approach
Two-sided or two-tail test
One-sided or one-tail test
5.7 Hypothesis Testing: The Test-of-Significance Approach
Testing the Significance of Regression Coefficients: The t Test
Testing the Significance of sigma^2: The Chi-squared Test
5.8 Hypothesis Testing: Some Practical Aspects
The Exact Level of Significance: The p Value
Statistical Significance vs. Practical Significance
The Choice between Confidence-Interval and Test-of-Significance Approaches to Hypothesis Testing
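A small sketch of 5.7-5.8 using statsmodels on simulated data (the data and the true beta2 = 0.5 are invented): the t test of a slope coefficient, its exact p value, and the confidence-interval approach side by side.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 0.5 * x + rng.normal(size=100)   # true beta2 = 0.5

X = sm.add_constant(x)                     # adds the intercept term beta1
res = sm.OLS(y, X).fit()

# t test of H0: beta2 = 0, and its exact level of significance (p value)
print(res.tvalues[1], res.pvalues[1])
# 95% confidence interval for beta2 -- the confidence-interval approach
print(res.conf_int(alpha=0.05)[1])
```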
5.9 Regression Analysis and Analysis of Variance
Chapter 6 Extensions of the Two-variable Linear Regression Model
6.1 Regression through the Origin
First we consider the case of regression through the origin, that is, a situation where the intercept term, beta1, is absent from the model. Second, we consider the question of the units of measurement, that is, how the Y and X variables are measured and whether a change in the units of measurement affects the regression results.
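A quick sketch of the first case on simulated data; note that statsmodels fits through the origin whenever the constant is simply not added (the data are invented).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, size=50)
y = 2.0 * x + rng.normal(size=50)          # data generated with no intercept

with_const = sm.OLS(y, sm.add_constant(x)).fit()   # usual model with beta1
through_origin = sm.OLS(y, x).fit()                # beta1 absent from the model

print(with_const.params)       # [beta1_hat, beta2_hat]
print(through_origin.params)   # [beta2_hat] only
# caution: R-squared is not directly comparable between the two specifications
```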
6.2 Scaling and Units of Measurement
6.3 Regression on Standardized Variables
6.4 Functional Forms of Regression Models
6.5 How to Measure Elasticity: The Log-linear Model
6.6 Semilog Models: Log-lin and Lin-log Models


How to Measure the Growth Rate: The Log-lin Model
The Lin-log Model
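A sketch of the log-lin growth-rate reading on a simulated series (series and 5% rate invented): the slope of ln Y on time is the instantaneous growth rate, and exp(beta2) - 1 recovers the compound rate per period.

```python
import numpy as np
import statsmodels.api as sm

t = np.arange(20)
true_growth = 0.05
y = 100 * (1 + true_growth) ** t            # series growing 5% per period

res = sm.OLS(np.log(y), sm.add_constant(t)).fit()
b2 = res.params[1]
print(b2)                    # instantaneous (continuous) growth rate: ln(1.05)
print(np.exp(b2) - 1)        # compound growth rate per period: ~0.05
```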
6.7 Reciprocal Models
6.8 Choice of Functional Form
*6.9 A Note on the Nature of the Stochastic Error Term: Additive versus Multiplicative Stochastic Error Term
2. Data analysis and statistical inference
Summary and Conclusions
regression analysis
discriminant analysis
cluster analysis
vintage analysis/cohort analysis
survival analysis
For economics/finance
game theory analysis: thinking strategically

I can't know everything


Pick small, easy tasks.
Read books that are short, few, and to the point.
bucketing approach based on a survival function: factors are divided into buckets depending on their survival function instead of their default rate
a new transformation of variables based on the survival function is developed; this procedure is called the logrank transformation
another new procedure for assessing the predictive power of an individual factor is developed, again based on the survival function
selection procedures are changed to select the most predictive factors for the survival model
procedures for the incorporation of different observations per facility due to new assessments: facilities in a portfolio are assessed on an interval basis, and methodologies for incorporating this data per facility are developed
The survival model developed is a Cox proportional hazards model. The performance of the model is compared to logistic regression techniques. The logrank transformation outperforms the logistic transformation and the statistical optimal approach, because it is more significant in predicting the survival probability based on the Wald test statistic.

Furthermore, the results of the ROC/AUC, power statistics, and KS statistic showed there is little difference in the performance of the survival models and the logistic regression. The model shows no improvement in performance but has certain advantages compared to the current model: it requires significantly less data cleaning, because the model estimates the survival probability over the entire data set, in contrast to logistic regression, which only estimates the survival probability for a fixed time interval.
Some remarks for further research concern the incorporation of truncation into the survival functions. This is another type of missing data and was not developed because it was beyond the scope of this thesis. Furthermore, the logrank transformation outperforms the logistic transformation and is recommended; this should be researched further.
censored data
truncated data
3.3 Types of survival models
3.3.1 Kaplan-Meier estimator
3.3.2 Parametric models
3.3.3 Accelerated failure time
3.3.4 Fully parametric proportional hazards model
3.3.5 Cox proportional hazards model
3.4 Comparison
Although there are similarities between the different models, the models are very different.
The advantages of the KM estimator are that it is easy to compute and to interpret. Furthermore, it doesn't require any assumptions about a baseline. One of the main drawbacks of this estimator is that it doesn't account for variables that are related to the survival time. It is a descriptive estimator and only describes the estimation of the survival function of the population; therefore it is only applicable to homogeneous samples. This model can be used to get a quick impression of the survival function of a population.
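To make the "easy to compute" point concrete, a minimal numpy implementation of the Kaplan-Meier product-limit estimator; the durations and censoring flags below are made up.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Product-limit estimate of S(t) at each distinct event time."""
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    times = np.unique(durations[observed])        # distinct event (default) times
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)          # facilities still under observation
        events = np.sum((durations == t) & observed)
        s *= 1.0 - events / at_risk               # multiply in this period's survival
        surv.append(s)
    return times, np.array(surv)

# toy sample: 1 = default observed, 0 = censored
t, s = kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 0, 1, 0])
print(dict(zip(t, np.round(s, 3))))
```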
The AFT model assumes that a covariate is able to accelerate or decelerate the time to a certain event by some constant. These models have two main advantages: they are very easily interpreted, and they are more robust to omitted covariates and less affected by the choice of probability distribution compared to proportional hazards models.
The basic idea behind proportional hazards models is that the effect of the covariate is to multiply the baseline hazard by some constant. In order to use these models, the proportional hazards assumption should hold. This assumption states that the relative risk of default of different groups is constant over time. For example, if at the start Facility 1 has a risk of default twice as high as Facility 2, then the risk of default for Facility 1 should be twice as high everywhere in time. There are two types of PH models: parametric PH models and Cox PH models. The difference is that the parametric PH models assume the baseline hazard function follows a specific distribution, whereas the Cox model does not make assumptions about the baseline. The Cox model makes estimations on the basis of the ranks of the survival times.
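In symbols (standard PH notation, not specific to the thesis): the hazard for covariates x is the baseline hazard scaled by a constant, so the hazard ratio of two facilities does not depend on t:

```latex
h(t \mid x) = h_0(t)\, e^{\beta^\top x},
\qquad
\frac{h(t \mid x_1)}{h(t \mid x_2)} = e^{\beta^\top (x_1 - x_2)}
\quad \text{(constant in } t\text{)}
```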
There are several reasons for the popularity of the Cox PH model. First, since the model does not require any assumptions about the baseline, it is robust, flexible, and a safe choice in many cases. Furthermore, the model is capable of handling discrete and continuous measures of event times, and it is possible to incorporate time-dependent covariates in order to correct for changes in the value of covariates over the course of the observation periods. The Cox PH model is therefore chosen for the development of the model.
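As a concrete sketch, a Cox PH fit with the lifelines package; the column names and toy facility data below are invented, not the thesis's actual inputs.

```python
import pandas as pd
from lifelines import CoxPHFitter

# toy facility-level data: time to default/censoring, event flag, one covariate
df = pd.DataFrame({
    "duration": [5, 8, 3, 12, 7, 9, 4, 11],
    "default":  [1, 0, 1, 0, 1, 0, 1, 0],
    "ltv":      [0.9, 0.6, 1.1, 0.5, 0.8, 0.95, 1.0, 0.45],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="default")
cph.print_summary()                 # coefficient, hazard ratio, Wald z and p value
# estimated survival curves S(t) for the first two facilities
print(cph.predict_survival_function(df[["ltv"]].iloc[:2]))
```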
In traditional approaches, the split of the factors was based upon the good-bad ratio (default rate) or similar measures.
Tong et al. stratified on the home
Figure: distribution of American baseball players' salaries in 1994 (horizontal axis: salaries in millions of dollars).
Chapter 8 Multiple Regression Analysis: The Problem of Inference
8.3 Hypothesis Testing about Individual Regression Coefficients
8.4 Testing the Overall Significance of the Sample Regression
The Analysis of Variance Approach to Testing the Overall Significance of an Observed Multiple Regression: The F Test
Testing the Overall Significance of a Multiple Regression: The F Test
An Important Relationship between R2 and F
Testing the Overall Significance of a Multiple Regression in Terms of R2
Testing the Equality of Two Regression Coefficients
1. This chapter extended and refined the ideas of interval estimation and hypothesis testing first introduced in Chapter 5 in the context of the two-variable linear regression model.
2. In a multiple regression, testing the individual significance of a partial regression coefficient (using the t test) and testing the overall significance of the regression (that all partial slope coefficients are zero, or R2 = 0) are not the same thing.
3. In particular, the finding that one or more partial regression coefficients are statistically insignificant on the basis of the individual t test does not mean that all partial regression coefficients are also (collectively) statistically insignificant. The latter hypothesis can be tested only by the F test.
4. The F test is versatile in that it can test a variety of hypotheses (several are sketched in code after this list), such as whether
(1) an individual regression coefficient is statistically significant
(2) all partial slope coefficients are zero
(3) two or more coefficients are statistically equal
(4) the coefficients satisfy some linear restrictions
(5) there is structural stability of the regression model
5. As in the two-variable case, the multiple regression model can be used for the purpose of mean and/or individual predictions.
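A sketch of several of these tests with statsmodels on simulated data (the data are invented; x1 and x2 are statsmodels' default names for array regressors): the overall F test, the R2-F identity, and an equality restriction.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + 0.8 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n)

res = sm.OLS(y, sm.add_constant(X)).fit()

# (2) overall significance: H0 that all partial slope coefficients are zero
print(res.fvalue, res.f_pvalue)

# the R2-F relationship: F = (R2/(k-1)) / ((1-R2)/(n-k)), here k = 3 parameters
k = 3
print((res.rsquared / (k - 1)) / ((1 - res.rsquared) / (n - k)))

# (3) equality of two coefficients, expressed as a linear restriction
print(res.f_test("x1 = x2"))
```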
Chapter 9 Dummy Variable Regression Models

In Chapter 1 we discussed
Several topics related to dummy variables are discussed in the literature that are rather advanced, including:
(1) random, or varying, parameters models
(2) switching regression models
(3) disequilibrium models
In the regression models considered in this text it is assumed that the parameters, the betas, are unknown but fixed entities. The random coefficient models (and there are several versions of them) assume the betas can be random too. A major reference work in this area is by Swamy.
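Stepping back to the chapter's basic device, a small sketch of a dummy-variable regression via statsmodels' formula interface; the category names, effects, and data below are all invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "region": rng.choice(["north", "south", "west"], size=90),
    "x": rng.normal(size=90),
})
effects = {"north": 0.0, "south": 1.5, "west": -1.0}  # true category shifts
df["y"] = df["region"].map(effects) + 0.5 * df["x"] + rng.normal(size=90)

# C() expands region into dummies, dropping one level as the benchmark category
res = smf.ols("y ~ C(region) + x", data=df).fit()
print(res.params)   # intercept = benchmark (north); others are differentials
```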
10.1 The Nature of Multicollinearity
10.2 Estimation in the Presence of Perfect Multicollinearity
10.3 Estimation in the Presence of "High" but "Imperfect" Multicollinearity
10.4 Multicollinearity: Much Ado about Nothing? Theoretical Consequences of Multicollinearity
10.5 Practical Consequences of Multicollinearity
Large Variances and Covariances of OLS Estimators
Wider Confidence Intervals
"Insignificant" t Ratios
Variance-inflating Factor (VIF)
A High R2 but few significant t Ratios
Sensitivity of OLS Estimators and Their Standard Errors to Small Changes in Data
Consequences of Micronumerosity
1. One of the assumptions of the classical linear regression model is that there is no multicollinearity among the explanatory variables, the X's. Broadly interpreted, multicollinearity refers to the situation where there is either an exact or an approximately exact linear relationship among the X variables.
2. The consequences of multicollinearity are as follows: if there is perfect collinearity among the X's, their regression coefficients are indeterminate and their standard errors are not defined. If collinearity is high but not perfect, estimation of regression coefficients is possible, but their standard errors tend to be large. As a result, the population values of the coefficients cannot be estimated precisely. However, if the objective is to estimate linear combinations of these coefficients, the estimable functions, this can be done even in the presence of perfect multicollinearity.
3. Although there are no sure methods of detecting collinearity, there are several indicators of it, which are as follows:
(a) The clearest sign of multicollinearity is when R2 is very high but none of the regression coefficients is statistically significant on the basis of the conventional t test. This case is, of course, extreme.
(b) In models involving just two explanatory variables, a fairly good idea of collinearity can be obtained by examining the zero-order, or simple, correlation coefficient between the two variables. If this correlation is high, multicollinearity is generally the culprit.
(c) However, the zero-order correlation coefficients can be misleading in models involving more than two variables, since it is possible to have low zero-order correlations and yet find high multicollinearity. In situations like these, one may need to examine the partial correlation coefficients.
(d) If R2 is high but the partial correlations are low, multicollinearity is a possibility. Here one or more variables may be superfluous. But if R2 is high and the partial correlations are also high, multicollinearity may not be readily detectable.
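The variance-inflating factor listed in the chapter outline gives a quantitative version of these checks; a sketch with statsmodels' variance_inflation_factor on simulated, deliberately collinear regressors (data invented).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
x2 = x1 + 0.05 * rng.normal(size=100)      # nearly collinear with x1
x3 = rng.normal(size=100)

X = sm.add_constant(np.column_stack([x1, x2, x3]))
# VIF_j = 1 / (1 - R_j^2); values above ~10 are a common warning sign
for j in range(1, X.shape[1]):             # skip the constant column
    print(f"VIF x{j}: {variance_inflation_factor(X, j):.1f}")
```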
Chapter 11 Heteroscedasticity: What Happens if the Error Variance is Nonconstant?
Remedial Measures:
There are two approaches to remediation: when the error variance is known and when it is not known.
When the variance is known: the method of weighted least squares.
When the variance is not known:
Assumption 1: The error variance is proportional to X^2.
Assumption 2: The error variance is proportional to X (the square root transformation).
Assumption 3: The error variance is proportional to the square of the mean value of Y.
Assumption 4: A log transformation such as ln Y = beta1 + beta2 ln X + u very often reduces heteroscedasticity when compared with the regression of Y on X.
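For the known-variance branch above, a sketch of weighted least squares in statsmodels, simulating Assumption 1's error variance proportional to X^2 (data invented); the weights 1/X^2 amount to dividing the model through by X.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(1, 10, size=200)
# error standard deviation proportional to x, i.e. variance proportional to x^2
y = 1.0 + 0.5 * x + x * rng.normal(size=200)

ols = sm.OLS(y, sm.add_constant(x)).fit()
wls = sm.WLS(y, sm.add_constant(x), weights=1.0 / x**2).fit()

print(ols.bse)   # OLS: coefficients still unbiased, but these SEs are unreliable
print(wls.bse)   # WLS standard errors under the assumed variance structure
```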

11.8 A Caution about Overreacting to Heteroscedasticity


1. A critical assumption of the classical linear regression model is that the disturbances ui all have the same variance, sigma^2. If this assumption is not satisfied, there is heteroscedasticity.
2. Heteroscedasticity does not destroy the unbiasedness and consistency properties of OLS estimators.
3. But these estimators are no longer efficient (minimum variance).
Autocorrelation: What Happens if the Error Terms are Correlated?
Summary and Conclusions:
1. If the assumption of the classical linear regression model that the errors or disturbances u entering into the population regression function (PRF) are random, or uncorrelated, is violated, the problem of serial correlation, or autocorrelation, arises.
The remedy depends on the nature of the interdependence among the disturbances.
Even if we use an AR(1) scheme, the coefficient of autocorrelation is not known a priori.
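Since rho is not known a priori, one common route is to estimate it from the OLS residuals and then quasi-difference; a rough Cochrane-Orcutt-style sketch on simulated AR(1) errors (data and rho = 0.7 invented).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, rho = 200, 0.7
u = np.zeros(n)
for t in range(1, n):                      # AR(1) disturbances
    u[t] = rho * u[t - 1] + rng.normal()
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + u

e = sm.OLS(y, sm.add_constant(x)).fit().resid
rho_hat = sm.OLS(e[1:], e[:-1]).fit().params[0]   # regress e_t on e_{t-1}
print(rho_hat)                                    # estimate of the AR(1) coefficient

# quasi-differenced (Cochrane-Orcutt) transformation, then OLS again;
# the intercept here estimates beta1 * (1 - rho_hat)
y_star = y[1:] - rho_hat * y[:-1]
x_star = x[1:] - rho_hat * x[:-1]
print(sm.OLS(y_star, sm.add_constant(x_star)).fit().params)
```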
