
Chapter 9 Modelling

A model is a mathematical representation of a real-world phenomenon, with the degree of accuracy required depending on the application.

Normative Approach to Modelling


Modelling in the physical and social sciences
o Normative means a theory of how something should be done. Controlled experiments are essential to the physical sciences but difficult to perform in the social sciences, which address questions of human behavior. The normative approach carries an implicit conviction that models are capable of describing reality very accurately.
o Statistical inference means diligent experimentation and study to refute or validate assumptions.
Exploratory Data Analysis
o The starting point is to collect and analyze data. Exploratory data analysis looks for patterns, often graphically, in order to find a suitable formula or model (see the sketch after this list).
o Three reasons for choosing a model of a particular form: statistical evidence, common sense, and successful prior use of that form.
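A minimal sketch of graphical exploratory analysis, assuming Python with matplotlib; the data here are simulated purely for illustration:

```python
# Minimal sketch of graphical exploratory data analysis: plot the series
# in time order and as a histogram to look for patterns before choosing
# a model form. The data here are simulated purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
data = rng.normal(0.03, 0.1, 100)  # illustrative observations

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(data)           # time order: any trends, jumps, or cycles?
ax1.set_title("Series over time")
ax2.hist(data, bins=20)  # shape: symmetric, skewed, heavy-tailed?
ax2.set_title("Distribution")
plt.tight_layout()
plt.show()
```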
Model Calibration
o Calibration means estimating the uncertain parameters in a model.
o Usually done by maximum likelihood estimation (the frequentist or classical approach) or by the Bayesian approach (a minimal MLE sketch follows below).
o Markov chain Monte Carlo (MCMC) has been a recent breakthrough.
o Our ability to estimate the parameters is limited by the noise in the data.
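A minimal sketch of maximum likelihood calibration, assuming a normal model; the data and variable names are hypothetical, not from the chapter:

```python
# Minimal sketch: calibrate the mean and standard deviation of a normal
# model by maximizing the log-likelihood numerically. Data are made up.
import numpy as np
from scipy import optimize, stats

data = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3])  # hypothetical observations

def neg_log_likelihood(params):
    mu, log_sigma = params  # log-parameterize sigma so it stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"MLE estimates: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```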
Fit to evidence
o Assess whether the model is correct or useful.
o Back-test, that is, check whether the model's mean and variance are comparable to those of the data used.
o Do out-of-sample (blind) testing, that is, set aside recent data during the calibration process and use it for testing (see the sketch after this list).
o Generate future projections and compare them with data.
o Longitudinal analysis follows the progress of one simulated path over time; cross-sectional analysis looks at a fixed future time horizon across many simulated outcomes.
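A minimal sketch of the out-of-sample idea, with a simulated series and an arbitrary cut-off chosen for illustration:

```python
# Minimal sketch of out-of-sample testing: calibrate on the early part of
# a series, hold out the recent part, and check how well the fitted model
# covers it. The series and the cut-off point are chosen for illustration.
import numpy as np

rng = np.random.default_rng(42)
series = rng.normal(loc=0.05, scale=0.2, size=120)  # simulated data

calib, holdout = series[:100], series[100:]  # set recent data aside
mu, sigma = calib.mean(), calib.std(ddof=1)  # calibrate on early data

lo, hi = mu - 1.96 * sigma, mu + 1.96 * sigma  # model's 95% interval
coverage = np.mean((holdout >= lo) & (holdout <= hi))
print(f"Held-out coverage of the 95% interval: {coverage:.0%}")
```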
Hypothesis Testing
o A statistical tool to check for fit
                 Test results support H0    Test results support H1
   H0 is true   Good outcome = 1 - a       Type I error = a
   H1 is true   Type II error = b          Good outcome = 1 - b
o The most powerful test is the one that, for a given value of a, has the smallest possible value of b (a simulation sketch follows below).
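An illustrative simulation of the two error rates, using a one-sample t-test on synthetic data; the effect size under H1 is chosen arbitrarily:

```python
# Illustrative sketch: estimate Type I and Type II error rates by simulation
# for a one-sample t-test. The H1 effect size (0.3) is chosen arbitrarily.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 2000

# Type I error: H0 (mean = 0) is true; count false rejections.
type1 = np.mean([stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0).pvalue < alpha
                 for _ in range(trials)])

# Type II error: H1 (mean = 0.3) is true; count failures to reject.
type2 = np.mean([stats.ttest_1samp(rng.normal(0.3, 1.0, n), 0).pvalue >= alpha
                 for _ in range(trials)])

print(f"Estimated Type I rate: {type1:.3f} (target {alpha})")
print(f"Estimated Type II rate: {type2:.3f}; power: {1 - type2:.3f}")
```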
Parsimony
o Adding more parameters might improve fit, but this violates the principle of parsimony: using only as many parameters as necessary to fit a model.
o When you use too many parameters you are simply fitting the sample to perfection; this is wrong because the parameters should represent the population, not the sample.
o To account for this, information criteria such as AIC and BIC impose a penalty for each additional parameter (a minimal sketch follows below).
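A minimal sketch of the penalty idea using the standard AIC and BIC formulas; the log-likelihood values and parameter counts below are hypothetical:

```python
# Minimal sketch: AIC and BIC penalize each extra parameter, so a richer
# model must improve the log-likelihood enough to justify its complexity.
import numpy as np

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
    return k * np.log(n) - 2 * log_likelihood

# Hypothetical fits: model B adds 3 parameters for a small likelihood gain.
n = 200
print("Model A:", aic(-520.0, 3), bic(-520.0, 3, n))
print("Model B:", aic(-518.5, 6), bic(-518.5, 6, n))
```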
Fit to theory
o The chosen model structure can be justified by reference to theory, instead of rigorously testing every alternative formally.
o Some models can be self-referential; that is, the use of a model can actually change the behavior of the phenomenon being modeled.
Computer technology development
o Translating formulas into program code is a separate task, and one commonly exposed to mistakes.
o Some proactive ways to look for errors:
   Numerical checks to validate the algebra
   Sense-check model outputs (movements should be explainable)
   Re-test software functionality after changes
   Document the tests performed to avoid repeating testing effort
   Remember that a mistake may not be down to your own code
o Many modellers start coding too early on an unstructured basis and are condemned to months of tracking down and fixing bugs. Defensive programming helps to deal with this situation (a minimal sketch follows below).
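A minimal sketch of defensive programming; the present_value function and its checks are hypothetical examples, not from the chapter:

```python
# Minimal sketch of defensive programming: validate inputs up front and
# fail with a clear message instead of silently producing bad numbers.
# The present_value function is a hypothetical example.
def present_value(cashflows, rate):
    if not cashflows:
        raise ValueError("cashflows must be a non-empty sequence")
    if rate <= -1.0:
        raise ValueError(f"rate must exceed -100%, got {rate}")
    pv = sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows, start=1))
    assert pv == pv, "present value is NaN; check the inputs"  # NaN != NaN
    return pv

print(present_value([100.0, 100.0, 1100.0], 0.05))  # a 3-year bond-like stream
```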
Using models for projections
o Statistical textbooks often emphasize the value of accompanying every estimate with a measure of its uncertainty.
o Back-tested outcomes may fall outside the confidence interval. Such errors may be caused by: process error, parameter error, model error, calibration error, survivorship bias, and operational errors.
Bootstrapping
o A labor-intensive but powerful technique: resample the data with replacement, re-estimate on each resample, and use the spread of the estimates as a measure of uncertainty (see the sketch below).
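A minimal sketch of bootstrapping the uncertainty of a sample mean; the data are simulated for illustration:

```python
# Minimal sketch of the bootstrap: resample the data with replacement many
# times and use the spread of the re-estimated parameter (here the mean)
# as a measure of its uncertainty. The data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=0.04, scale=0.15, size=60)

n_boot = 5000
means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                  for _ in range(n_boot)])

lo, hi = np.percentile(means, [2.5, 97.5])
print(f"Bootstrap 95% interval for the mean: [{lo:.4f}, {hi:.4f}]")
```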
Computational model classification
o Deterministic
   Applies a set of formulas without introducing any randomness
   A probabilistic setting can be transformed into a deterministic model by replacing all random items with their expected values
   A model office is a model of the financial statements and cash flows of an insurance operation
o Stochastic
   A model that makes explicit reference to different probabilities, producing a distribution of outcomes rather than a single value (see the sketch after this list)
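A minimal sketch of the contrast between the two classifications, using a hypothetical one-year fund projection:

```python
# Minimal sketch: the same one-year fund projection run deterministically
# (the random return replaced by its expected value) and stochastically
# (the return sampled many times). All figures are hypothetical.
import numpy as np

fund0, mu, sigma = 1000.0, 0.05, 0.12
rng = np.random.default_rng(7)

deterministic = fund0 * (1 + mu)                          # single point estimate
stochastic = fund0 * (1 + rng.normal(mu, sigma, 10_000))  # distribution of outcomes

print(f"Deterministic: {deterministic:.2f}")
print(f"Stochastic mean: {stochastic.mean():.2f}, "
      f"5th/95th percentiles: {np.percentile(stochastic, [5, 95]).round(2)}")
```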
Limitations on Normative Approach
Practical difficulties
o There are many interpretations of 'normative', and the interpretation chosen may be just one of many.
o Statistical inference might lend support on the basis of a small or poorly understood data set, which may simply have been the only available source.
o A better model might be omitted because it has little impact on the results.
Theoretical ambiguities
o When comparing models in a hypothesis test, one could simply compare against an implausible model in order to validate one's own choice.
o At the 95% confidence level, testing 20 models will on average leave about one looking significant purely by chance, quickly overlooking the 19 already compared (see the simulation sketch after this list).
o It requires skill and judgment to construct a proper hypothesis test.
o A 'Type III' error arises when one must pick between H0 and H1 when neither is correct.
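An illustrative simulation of the multiple-testing point above; all 20 candidate "models" are pure noise, yet on average about one clears the 5% threshold:

```python
# Illustrative sketch of the multiple-testing trap: test 20 candidate
# "models" that are all pure noise at the 5% level; on average about one
# of them will look significant purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
false_positives = sum(
    stats.ttest_1samp(rng.normal(0.0, 1.0, 50), 0).pvalue < 0.05
    for _ in range(20)
)
print(f"Spurious significant results out of 20: {false_positives}")
```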

Expecting the unexpected


o Stress test against past scenarios; this is difficult when the relevant experience is lacking, and the impact may be overstated if the event already sits in the data. Techniques for estimating such tail events:
   Exposure-based analysis looks at the drivers of potential losses even when no claims have been made
   An analysis of extreme but painful outcomes may make use of data on near misses
   Extrapolating from the data to estimate rare events (see the sketch below)
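A minimal sketch of one common extrapolation technique, the peaks-over-threshold method from extreme value theory (the notes do not name a specific method; this choice, and the simulated losses, are assumptions):

```python
# Minimal sketch of tail extrapolation via the peaks-over-threshold method:
# fit a generalized Pareto distribution to exceedances over a high threshold
# and extrapolate beyond the observed data. Losses are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=2000)  # illustrative losses

threshold = np.percentile(losses, 95)
excess = losses[losses > threshold] - threshold

shape, _, scale = stats.genpareto.fit(excess, floc=0.0)

x = 3 * threshold  # a level well beyond most of the observed data
p_tail = (excess.size / losses.size) * stats.genpareto.sf(x - threshold, shape, scale=scale)
print(f"Estimated P(loss > {x:.1f}) ~ {p_tail:.2e}")
```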

Commercial modelling
The role of modelling within the actuarial control cycle
Costs of models and data
Robustness
Governance and control
Models for advocacy
Models and markets
Disclosure
