Computational finance

Computational finance or financial engineering is a cross-disciplinary field which relies on mathematical finance, numerical methods, computational intelligence
and computer simulations to make trading, hedging and investment decisions, as
well as facilitating the risk management of those decisions. Utilizing various
methods, practitioners of computational finance aim to precisely determine the
financial risk that certain financial instruments create.

1) Mathematical finance
2) Numerical methods
3) Computer simulations
4) Computational intelligence
5) Financial risk

History

Generally, individuals who fill positions in computational finance are known as “quants”, referring to the quantitative skills necessary to perform the job.
Specifically, knowledge of the C++ programming language, as well as of the
mathematical subfields of: stochastic calculus, multivariate calculus, linear
algebra, differential equations, probability theory and statistical inference are
often entry level requisites for such a position. C++ has become the dominant
language for two main reasons: the computationally intensive nature of many
algorithms, and the focus on libraries rather than applications.

Computational finance was traditionally populated by Ph.D.s in finance, physics and mathematics who moved into the field from more pure, academic backgrounds (either directly from graduate school, or after teaching or research). However, as the actual use of computers has become essential to rapidly carrying out computational finance decisions, a background in computer programming has become useful, and hence many computer programmers enter the field either from Ph.D. programs or from other fields of software engineering. In recent years, advanced computational methods, such as neural networks and evolutionary computation, have opened new doors in computational finance. Practitioners of computational finance have also come from fields such as signal processing, computational fluid dynamics and artificial intelligence.

Today, all full-service institutional finance firms employ computational finance professionals in their banking and finance operations (as opposed to being ancillary information technology specialists), while many other boutique firms, ranging from 20 or fewer employees to several thousand, specialize in quantitative trading alone. JPMorgan Chase & Co. was one of the first firms to create a large derivatives business and employ computational finance (including
through the formation of RiskMetrics), while D. E. Shaw & Co. is probably the
oldest and largest quant fund (Citadel Investment Group is a major rival).

Introduction

One of the most common applications of computational finance is within the arena of investment banking. Because of the sheer amount of funds involved in this type of situation, computational finance comes to the fore as one of the tools used to evaluate every potential investment, whether it be something as simple as a new start-up company or a well-established fund. Computational finance can help prevent the investment of large amounts of funding in something that simply does not appear to have much of a future.

Another area where computational finance comes into play is the world of
financial risk management. Stockbrokers, stockholders, and anyone who chooses
to invest in any type of investment can benefit from using the basic principles of
computational finance as a way of managing an individual portfolio. Running the
numbers for individual investors, just as for larger concerns, can often make it
clear what risks are associated with any given investment opportunity. The result
can often be an individual who is able to sidestep a bad opportunity, and live to
invest another day in something that will be worthwhile in the long run.

In the business world, the use of computational finance can often come into play
when the time to engage in some form of corporate strategic planning arrives. For
instance, reorganizing the operating structure of a company in order to maximize
profits may look very good at first glance, but running the data through a process
of computational finance may in fact uncover some drawbacks to the current plan
that were not readily visible before.

The complete and true expenses associated with the restructuring may prove to be higher than anticipated, and in the long run the plan may not be as productive as was originally hoped. Computational finance can help get past the hype and provide a realistic view of what could happen before any corporate strategy is implemented.


Quantitative analysis

A quantitative analyst is a person who works in finance using numerical or quantitative techniques. Similar work is done in most other modern industries, and the work is likewise called quantitative analysis. In the investment industry, people who perform quantitative analysis are frequently called quants.

Although the original quants were concerned with risk management and
derivatives pricing, the meaning of the term has expanded over time to include
those individuals involved in almost any application of mathematics in finance.
An example is statistical arbitrage.

History

Quantitative finance started in the U.S. in the 1930s as some astute investors
began using mathematical formulae to price stocks and bonds.

Robert C. Merton, a pioneer of quantitative analysis, introduced stochastic calculus into the study of finance.

Harry Markowitz's 1952 Ph.D. thesis "Portfolio Selection" was one of the first
papers to formally adapt mathematical concepts to finance. Markowitz formalized
a notion of mean return and covariances for common stocks which allowed him to
quantify the concept of "diversification" in a market. He showed how to compute
the mean return and variance for a given portfolio and argued that investors
should hold only those portfolios whose variance is minimal among all portfolios
with a given mean return. Although the language of finance now involves Itō
calculus, minimization of risk in a quantifiable manner underlies much of the
modern theory.

In 1969 Robert Merton introduced stochastic calculus into the study of finance.
Merton was motivated by the desire to understand how prices are set in financial
markets, which is the classical economics question of "equilibrium," and in later
papers he used the machinery of stochastic calculus to begin investigation of this
issue.

At the same time as Merton's work and with Merton's assistance, Fischer Black and Myron Scholes were developing their option pricing formula, work for which Scholes and Merton were later awarded the 1997 Nobel Prize in Economics. It provided a solution for a practical
problem, that of finding a fair price for a European call option, i.e., the right to
buy one share of a given stock at a specified price and time. Such options are
frequently purchased by investors as a risk-hedging device. In 1981, Harrison and
Pliska used the general theory of continuous-time stochastic processes to put the
Black-Scholes option pricing formula on a solid theoretical basis, and as a result,
showed how to price numerous other "derivative" securities.
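As a small illustration of the result described above, the following sketch evaluates the Black-Scholes formula for a European call in Python; the language choice and the parameter values are assumptions made purely for the example.

    # Sketch: Black-Scholes price of a European call option.
    # C = S*N(d1) - K*exp(-r*T)*N(d2), with
    # d1 = (ln(S/K) + (r + 0.5*sigma^2)*T) / (sigma*sqrt(T)) and d2 = d1 - sigma*sqrt(T).
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # Standard normal cumulative distribution function via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        """Price of a European call on a non-dividend-paying stock."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # Illustrative numbers only: spot 100, strike 100, 1 year, 5% rate, 20% volatility.
    print(round(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.20), 4))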


Quantitative and Computational Finance

‘Quantitative Finance’ as a branch of modern finance is one of the fastest growing areas within the corporate world. Together with the sophistication and complexity
of modern financial products, this exciting discipline continues to act as the
motivating factor for new mathematical models and the subsequent development
of associated computational schemes. Alternative names for this subject area are
Mathematical Finance, Financial Mathematics or Financial Engineering.

This is a course in the applied aspects of mathematical finance, in particular derivative pricing. The necessary understanding of products and markets required
will be covered during the course. The overall theme of the course is to develop
the Partial Differential Equation (PDE) approach to the pricing of options. As well
as a two hour examination during the summer term, students will undertake a
short computing project where they will use numerical and computational
techniques to perform derivative pricing.

Simulation Methods in Finance: Brief introduction to Stochastic Differential Equations (SDEs) – drift, diffusion, Itô’s Lemma. The statistics of random number generation in Excel. Simulating asset price SDEs in Excel.

Financial Products and Markets: Introduction to the financial markets and the products which are traded in them: equities, indices, foreign exchange, the fixed-income world and commodities. Options contracts and strategies for speculation and hedging.

Black-Scholes framework: Similarity reduction and fundamental solution for the heat equation. Black-Scholes PDE: simple European calls and puts; put-call parity. The PDE for pricing commodity and currency options. Discontinuous payoffs – Binary and Digital options. The Greeks: theta, delta, gamma, vega & rho and their role in hedging.

Computational Finance: Solving the pricing PDEs numerically using Explicit, Implicit and Crank-Nicolson Finite Difference Schemes. Stability criteria. Monte Carlo technique for derivative pricing.

Fixed-Income Products: Introduction to the properties and features of fixed income products; yield, duration & convexity. Stochastic interest rate models: stochastic differential equation for the spot interest rate; bond pricing PDE; popular models for the spot rate (Vasicek, CIR and Hull & White); solutions of the bond pricing equation.
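To make the simulation and Monte Carlo rows of this outline concrete, here is a minimal sketch in Python (rather than Excel, as an assumption for illustration) that simulates the terminal value of a geometric Brownian motion asset price and prices a European call by Monte Carlo; all parameter values are invented.

    # Sketch: Monte Carlo pricing of a European call under geometric Brownian motion,
    # dS = r*S*dt + sigma*S*dW (risk-neutral dynamics).
    import math
    import random

    def mc_call_price(S0, K, T, r, sigma, n_paths=100_000, seed=42):
        random.seed(seed)
        total_payoff = 0.0
        for _ in range(n_paths):
            z = random.gauss(0.0, 1.0)
            # Exact GBM step over [0, T]: S_T = S0 * exp((r - 0.5*sigma^2)*T + sigma*sqrt(T)*z)
            ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
            total_payoff += max(ST - K, 0.0)
        return math.exp(-r * T) * total_payoff / n_paths

    # Illustrative parameters; the estimate should be close to the Black-Scholes value.
    print(round(mc_call_price(100.0, 100.0, 1.0, 0.05, 0.20), 2))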


Fixed Income

The fixed-income market demands a vast selection of investment options with a variety of credit quality, maturities, and yields to meet investors' objectives. To
accomplish this, fixed income groups frequently create and modify mathematical
models to calculate bond pricing, perform yield analysis, calculate cash flows, and
develop hedging strategies.

Fixed-income research groups use the thousands of prewritten math and graphics
functions in MathWorks products to access bond data, perform statistical analysis,
calculate spreads, determine bond and derivative pricing, perform sensitivity
analyses, and run Monte Carlo simulations.

Advanced graphics and rendering capabilities in MATLAB make reviewing cash flows, visualizing decision trees, plotting spot and forward curves, and creating
deployable interactive 2- and 3-D models easy.

Equity

Smart security investing requires in-depth research and analysis. Measuring all
the influencing factors is an essential part of risk management. As a result,
research groups continually create and modify mathematical models to calculate
stock value, review forecasts, and develop innovative risk strategies.

Equity research groups use the thousands of math and graphics functions in
MathWorks products to access stock data, perform statistical analysis, determine
derivatives pricing, perform sensitivity analyses, and run Monte Carlo
simulations. The graphics capabilities in MATLAB offer a variety of ways to
review time series data, visualize portfolio risks and returns, and create
forecasting graphs.

Investment Management and Trading

To meet the investment needs of individuals, institutions, and governments, investment firms need to deliver a wide range of investment opportunities with
risk-adjusted performance and consistent returns over time. To accomplish this,
financial professionals need to develop and use mathematical models to optimize
portfolios and develop trading strategies and systems that can respond to market
conditions.

Investment management and trading research groups use the thousands of math and graphics functions in MathWorks products to easily access securities data,
perform statistical analysis, determine pricing, conduct sensitivity and principal component analyses, and implement buy and sell criteria. The graphics
capabilities in MATLAB offer a variety of ways to easily review time series data,
visualize portfolio risks and returns, and create forecasting graphs. With
MathWorks deployment tools, you can easily compile and integrate your
MATLAB algorithms into your system.

Mathematical and statistical approaches

According to Fund of Funds analyst Fred Gehm, "There are two types of
quantitative analysis and, therefore, two types of quants. One type works
primarily with mathematical models and the other primarily with statistical
models. While there is no logical reason why one person can't do both kinds of
work, this doesn’t seem to happen, perhaps because these types demand different
skill sets and, much more important, different psychologies."

A typical problem for a numerically oriented quantitative analyst would be to develop a model for pricing and managing a complex derivative product.

A typical problem for a statistically oriented quantitative analyst would be to develop a model for deciding which stocks are relatively expensive and which
stocks are relatively cheap. The model might include a company's book value to
price ratio, its trailing earnings to price ratio and other accounting factors. An
investment manager might implement this analysis by buying the underpriced
stocks, selling the overpriced stocks or both.
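A minimal sketch of such a relative-value screen, with hypothetical tickers and accounting ratios invented for illustration:

    # Sketch: rank stocks by a simple composite value score built from
    # book-to-price and trailing-earnings-to-price ratios (higher = cheaper).
    stocks = {
        # ticker: (book_to_price, earnings_to_price) -- hypothetical values
        "AAA": (0.80, 0.09),
        "BBB": (0.35, 0.04),
        "CCC": (1.10, 0.02),
        "DDD": (0.55, 0.07),
    }

    def value_score(ratios):
        book_to_price, earnings_to_price = ratios
        # Equal-weighted composite; a real model would standardize each factor first.
        return 0.5 * book_to_price + 0.5 * earnings_to_price

    ranked = sorted(stocks, key=lambda t: value_score(stocks[t]), reverse=True)
    print("cheapest to richest:", ranked)
    # A manager might buy names near the top of the ranking and sell those near the bottom.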

One of the principal mathematical tools of quantitative finance is stochastic calculus.

According to a July 2008 Aite Group report, today quants often use alpha
generation platforms to help them develop financial models. These software
solutions enable quants to centralize and streamline the alpha generation process.


Areas of application

Areas where computational finance techniques are employed include:

• Investment banking
• Forecasting
• Risk Management software
• Corporate strategic planning
• Securities trading and financial risk management
• Derivatives trading and risk management
• Investment management
• Pension scheme
• Insurance policy
• Mortgage agreement
• Lottery design
• Islamic banking
• Currency peg
• Gold and commodity valuation
• Collateralised debt obligation
• Credit default swap
• Bargaining
• Market mechanism design


Classification of methods

Computational finance or financial engineering is a cross-disciplinary field which relies on mathematical finance, numerical methods, computational intelligence
and computer simulations to make trading, hedging and investment decisions, as
well as facilitating the risk management of those decisions. Utilizing various
methods, practitioners of computational finance aim to precisely determine the
financial risk that certain financial instruments create.

1) Mathematical finance
2) Numerical methods
3) Computer simulations
4) Computational intelligence
5) Financial risk

Mathematical finance

Mathematical finance comprises the branches of applied mathematics concerned with the financial markets.

The subject has a close relationship with the discipline of financial economics,
which is concerned with much of the underlying theory. Generally, mathematical
finance will derive, and extend, the mathematical or numerical models suggested
by financial economics. Thus, for example, while a financial economist might
study the structural reasons why a company may have a certain share price, a
financial mathematician may take the share price as a given, and attempt to use
stochastic calculus to obtain the fair value of derivatives of the stock (see:
Valuation of options).

In terms of practice, mathematical finance also overlaps heavily with the field of
computational finance (also known as financial engineering). Arguably, these are
largely synonymous, although the latter focuses on application, while the former
focuses on modeling and derivation (see: Quantitative analyst).

The fundamental theorem of arbitrage-free pricing is one of the key theorems in mathematical finance.

History

The history of mathematical finance starts with the theory of portfolio optimization by Harry Markowitz, who used mean-variance estimates of portfolios to judge investment strategies, causing a shift away from the concept of trying to identify the best individual stock for investment. Using a linear regression
strategy to understand and quantify the risk (i.e. variance) and return (i.e. mean)
of an entire portfolio of stocks and bonds, an optimization strategy was used to
choose a portfolio with largest mean return subject to acceptable levels of
variance in the return. Simultaneously, William Sharpe developed the
mathematics of determining the correlation between each stock and the market.
For their pioneering work, Markowitz and Sharpe, along with Merton Miller, shared the 1990 Nobel Prize in Economics, the first time the prize was awarded for work in finance.

The portfolio-selection work of Markowitz and Sharpe introduced mathematics to the “black art” of investment management. With time, the mathematics has
become more sophisticated. Thanks to Robert Merton and Paul Samuelson, one-
period models were replaced by continuous time, Brownian-motion models, and
the quadratic utility function implicit in mean–variance optimization was replaced
by more general increasing, concave utility functions.
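As a toy illustration of the mean-variance idea, the sketch below computes the weights of the global minimum-variance portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1), for an invented three-asset covariance matrix; it is only a simplified instance of the optimization described above.

    # Sketch: global minimum-variance portfolio weights for three assets.
    # w = inv(Sigma) @ ones / (ones @ inv(Sigma) @ ones)
    import numpy as np

    # Hypothetical annualized covariance matrix of returns.
    Sigma = np.array([[0.040, 0.006, 0.004],
                      [0.006, 0.090, 0.010],
                      [0.004, 0.010, 0.160]])

    ones = np.ones(3)
    w = np.linalg.solve(Sigma, ones)   # Sigma^{-1} * 1
    w /= ones @ w                      # normalize so the weights sum to one
    print("weights:", np.round(w, 3))
    print("portfolio variance:", round(float(w @ Sigma @ w), 5))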

INTRODUCTION:

The mathematical formulation of problems arising in science, engineering, economics and finance involving rates of change with respect to one independent variable is governed by ordinary differential equations. Solutions of “real life” problems often require developing and applying numerical/computational techniques to model complex physical situations which otherwise are not possible to solve by analytical means. The choice of such a technique depends on how accurate a solution is required, as well as on a multitude of other factors, including computing-time constraints and the stability of the method. Once a suitable numerical technique has been applied and the problem is transformed into an algorithmic form, one can use the powerful computational facilities available.

GOALS:

• Develop/derive techniques to solve an initial value problem (IVP) or boundary value problem (BVP) and calculate the associated local/global truncation errors.

• Identify and apply an economical and efficient method to get a numerical solution of an ODE or a system of ODEs.

• Examine the stability of various numerical schemes.

• Compare numerical solutions of some easy differential equations with their analytical counterparts (a small sketch follows this list).

• Gain experience of ODE solvers through MATLAB.
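A minimal sketch of the comparison mentioned in the fourth goal, written in Python rather than MATLAB purely for illustration: the explicit Euler method applied to the IVP y' = -2y, y(0) = 1, checked against the exact solution y(t) = e^(-2t).

    # Sketch: explicit Euler method for y' = -2*y, y(0) = 1, compared with y(t) = exp(-2t).
    import math

    def euler(f, y0, t0, t_end, h):
        t, y = t0, y0
        while t < t_end - 1e-12:
            y += h * f(t, y)   # Euler step: y_{n+1} = y_n + h*f(t_n, y_n)
            t += h
        return y

    f = lambda t, y: -2.0 * y
    for h in (0.1, 0.01, 0.001):
        approx = euler(f, 1.0, 0.0, 1.0, h)
        exact = math.exp(-2.0)
        print(f"h={h:<6} Euler={approx:.6f} exact={exact:.6f} error={abs(approx - exact):.2e}")

The printed errors shrink roughly in proportion to h, which is the expected first-order global truncation error of the Euler scheme.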


Applied mathematics

Applied mathematics is a branch of mathematics that concerns itself with the mathematical techniques typically used in the application of mathematical knowledge to other domains.

Divisions of applied mathematics

There is no consensus on what the various branches of applied mathematics are. Such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees.

Historically, applied mathematics consisted principally of applied analysis, most notably differential equations, approximation theory (broadly construed, to
include representations, asymptotic methods, variational methods, and numerical
analysis), and applied probability. These areas of mathematics were intimately
tied to the development of Newtonian Physics, and in fact the distinction between
mathematicians and physicists was not sharply drawn before the mid-19th
century. This history left a legacy as well; until the early 20th century subjects
such as classical mechanics were often taught in applied mathematics departments
at American universities rather than in physics departments, and fluid mechanics
may still be taught in applied mathematics departments.

Today, the term applied mathematics is used in a broader sense. It includes the
classical areas above, as well as other areas that have become increasingly
important in applications. Even fields such as number theory that are part of pure
mathematics are now important in applications (such as cryptology), though they
are not generally considered to be part of the field of applied mathematics per se.
Sometimes the term applicable mathematics is used to distinguish between the
traditional field of applied mathematics and the many more areas of mathematics
that are applicable to real-world problems.

Mathematicians distinguish between applied mathematics, which is concerned with mathematical methods, and the applications of mathematics within science
and engineering. A biologist using a population model and applying known
mathematics would not be doing applied mathematics, but rather using it.
However, nonmathematicians do not usually draw this distinction. The use of
mathematics to solve industrial problems is called industrial mathematics.
Industrial mathematics is sometimes split into two branches: techno-mathematics
(covering problems coming from technology) and econo-mathematics (for
problems in economy and finance).


The success of modern numerical mathematical methods and software has led to
the emergence of computational mathematics, computational science, and
computational engineering, which use high performance computing for the
simulation of phenomena and solution of problems in the sciences and
engineering. These are often considered interdisciplinary programs.

Utility of applied mathematics

Historically, mathematics was most important in the natural sciences and engineering. However, after World War II, fields outside of the physical sciences
have spawned the creation of new areas of mathematics, such as game theory,
which grew out of economic considerations, or neural networks, which arose out
of the study of the brain in neuroscience, or bioinformatics, from the importance
of analyzing large data sets in biology.

The advent of the computer has created new applications, both in studying and
using the new computer technology itself (computer science, which uses
combinatorics, formal logic, and lattice theory), as well as using computers to
study problems arising in other areas of science (computational science), and of
course studying the mathematics of computation (numerical analysis). Statistics is
probably the most widespread application of mathematics in the social sciences,
but other areas of mathematics are proving increasingly useful in these
disciplines, especially in economics and management science.

Other mathematical sciences (associated with applied mathematics)

Applied mathematics is closely related to other mathematical sciences.

Scientific computing

Scientific computing includes applied mathematics (especially numerical analysis), computing science (especially high-performance computing), and mathematical modelling in a scientific discipline.

Computer Science

Computer science relies on logic and combinatorics.

Operations research and management science

Operations research and management science are often taught in faculties of engineering, business, and public policy.


Statistics

Applied mathematics has substantial overlap with the discipline of statistics. Statistical theorists study and improve statistical procedures with
mathematics, and statistical research often raises mathematical questions.
Statistical theory relies on probability and decision theory, and makes
extensive use of scientific computing, analysis, and optimization; for the
design of experiments, statisticians use algebra and combinatorics. Applied
mathematicians and statisticians often work in a department of mathematical
sciences (particularly at colleges and small universities).

Statisticians have long complained that many mathematics departments have assigned mathematicians (without statistical competence) to teach statistics courses, effectively giving "double blind" courses. Examining data from 2000, Schaeffer and Stasny reported:

By far the majority of instructors within statistics departments have at least a master’s degree in statistics or biostatistics (about 89% for doctoral
departments and about 79% for master’s departments). In doctoral
mathematics departments, however, only about 58% of statistics course
instructors had at least a master’s degree in statistics or biostatistics as their
highest degree earned. In master’s-level mathematics departments, the
corresponding percentage was near 44%, and in bachelor’s-level departments
only 19% of statistics course instructors had at least a master’s degree in
statistics or biostatistics as their highest degree earned. As we expected, a
large majority of instructors in statistics departments (83% for doctoral
departments and 62% for master’s departments) held doctoral degrees in
either statistics or biostatistics. The comparable percentages for instructors of
statistics in mathematics departments were about 52% and 38%.

This unprofessional conduct violates the "Statement on Professional Ethics" of the American Association of University Professors (which has been
affirmed by many colleges and universities in the USA) and the ethical codes
of the International Statistical Institute and the American Statistical
Association. The principle that statistics-instructors should have statistical
competence has been affirmed by the guidelines of the Mathematical
Association of America, which has been endorsed by the American Statistical
Association.

Actuarial science

Actuarial science uses probability, statistics, and economic theory.


Other disciplines

The line between applied mathematics and specific areas of application is often blurred. Many universities teach mathematical and statistical courses
outside of the respective departments, in departments and areas including
business and economics, engineering, physics, psychology, biology, computer
science, and mathematical physics.

Mathematical tools

• Asymptotic analysis
• Calculus
• Copulas
• Differential equation
• Ergodic theory
• Gaussian copulas
• Numerical analysis
• Real analysis
• Probability
• Probability distribution
o Binomial distribution
o Log-normal distribution
• Expected value
• Value at risk
• Risk-neutral measure
• Stochastic calculus
o Brownian motion
o Lévy process
• Itô's lemma
• Fourier transform
• Girsanov's theorem
• Radon-Nikodym derivative
• Monte Carlo method
• Quantile function
• Partial differential equations
o Heat equation
• Martingale representation theorem
• Feynman-Kac formula
• Stochastic differential equations
• Volatility
o ARCH model
o GARCH model
• Stochastic volatility
• Mathematical model
• Numerical method

o Numerical partial differential equations
 - Crank-Nicolson method
 - Finite difference method

Derivatives pricing

• The Brownian Motion Model of Financial Markets
• Rational pricing assumptions
o Risk neutral valuation
o Arbitrage-free pricing
• Futures
o Futures contract pricing
• Options
o Put–call parity (Arbitrage relationships for options)
o Intrinsic value, Time value
o Moneyness
o Pricing models
 - Black–Scholes model
 - Black model
 - Binomial options model
 - Monte Carlo option model
 - Implied volatility, Volatility smile
 - SABR Volatility Model
 - Markov Switching Multifractal
 - The Greeks
o Optimal stopping (Pricing of American options)
• Interest rate derivatives
o Short rate model
 - Hull-White model
 - Cox-Ingersoll-Ross model
 - Chen model
o LIBOR Market Model
o Heath-Jarrow-Morton framework


Areas of application

• Computational finance
• Quantitative Behavioral Finance
• Derivative (finance), list of derivatives topics
• Modeling and analysis of financial markets
• International Swaps and Derivatives Association
• Fundamental financial concepts - topics
• Model (economics)
• List of finance topics
• List of economics topics, List of economists
• List of accounting topics
• Statistical Finance


Numerical analysis

Numerical analysis is the study of algorithms for the problems of continuous mathematics (as distinguished from discrete mathematics).

One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a sexagesimal numerical approximation of √2, the length of the diagonal of a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in carpentry and construction. In a rectangular wall section that is 2.40 meters by 3.75 meters, a diagonal beam has to be 4.45 meters long.

Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation to √2, modern numerical
analysis does not seek exact answers, because exact answers are impossible to
obtain in practice. Instead, much of numerical analysis is concerned with
obtaining approximate solutions while maintaining reasonable bounds on errors.

Numerical analysis naturally finds applications in all fields of engineering and the
physical sciences, but in the 21st century, the life sciences and even the arts have
adopted elements of scientific computations. Ordinary differential equations
appear in the movement of heavenly bodies (planets, stars and galaxies);
optimization occurs in portfolio management; numerical linear algebra is essential
to quantitative psychology; stochastic differential equations and Markov chains
are essential in simulating living cells for medicine and biology.

Before the advent of modern computers numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers
calculate the required functions instead. The interpolation algorithms nevertheless
may be used as part of the software for solving differential equations.

General introduction

The overall goal of the field of numerical analysis is the design and analysis of
techniques to give approximate but accurate solutions to hard problems, the
variety of which is suggested by the following.

• Advanced numerical methods are essential in making numerical weather prediction feasible.
• Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
• Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.

• Hedge funds (private investment funds) use tools from all fields of
numerical analysis to calculate the value of stocks and derivatives more
precisely than other market participants.
• Airlines use sophisticated optimization algorithms to decide ticket prices,
airplane and crew assignments and fuel needs. This field is also called
operations research.
• Insurance companies use numerical programs for actuarial analysis.

The rest of this section outlines several important themes of numerical analysis.

History

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago.
Many great mathematicians of the past were preoccupied by numerical analysis,
as is obvious from the names of important algorithms like Newton's method,
Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and
tables of data such as interpolation points and function coefficients. Using these
tables, often calculated out to 16 decimal places or more for some functions, one
could look up values to plug into the formulas given and achieve very good
numerical estimates of some functions. The canonical work in the field is the
NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a
very large number of commonly used formulas and functions and their values at
many points. The function values are no longer very useful when a computer is
available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation.
These calculators evolved into electronic computers in the 1940s, and it was then
found that these computers were also useful for administrative purposes. But the
invention of the computer also influenced the field of numerical analysis, since
now longer and more complicated calculations could be done.

Direct and iterative methods

Direct vs. iterative methods

Consider the problem of solving

3x³ + 4 = 28

for the unknown quantity x.


Direct Method
3x³ + 4 = 28.
Subtract 4: 3x³ = 24.
Divide by 3: x³ = 8.
Take cube roots: x = 2.


For the iterative method, apply the bisection method to f(x) = 3x³ - 24. The initial values are a = 0, b = 3, f(a) = -24, f(b) = 57.

Iterative Method
a b mid f(mid)
0 3 1.5 -13.875
1.5 3 2.25 10.17...
1.5 2.25 1.875 -4.22...
1.875 2.25 2.0625 2.32...

We conclude from this table that the solution is between 1.875 and 2.0625. The
algorithm might return any number in that range with an error less than 0.2.
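The same iteration can be written as a short program. The sketch below is a Python transcription of the table above (implementation details such as the tolerance are assumptions): it bisects f(x) = 3x³ - 24 on [0, 3] and produces exactly the iterates shown.

    # Sketch: bisection method for f(x) = 3*x**3 - 24 on the bracket [0, 3].
    def bisect(f, a, b, tol=0.2):
        fa, fb = f(a), f(b)
        assert fa * fb < 0, "f must change sign on [a, b]"
        while (b - a) / 2.0 > tol:
            mid = (a + b) / 2.0
            fmid = f(mid)
            if fa * fmid <= 0:     # the root lies in the left half
                b, fb = mid, fmid
            else:                  # the root lies in the right half
                a, fa = mid, fmid
        return (a + b) / 2.0       # any point of the final bracket is within tol of the root

    f = lambda x: 3.0 * x ** 3 - 24.0
    print(bisect(f, 0.0, 3.0))     # the exact root is x = 2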

Discretization and numerical integration

In a two hour race, we have measured the speed of the car at three instants and
recorded them in the following table.

Time 0:20 1:00 1:40

km/h 140 150 180

A discretization would be to say that the speed of the car was constant from 0:00
to 0:40, then from 0:40 to 1:20 and finally from 1:20 to 2:00. For instance, the
total distance traveled in the first 40 minutes is approximately (2/3h x 140
km/h)=93.3 km. This would allow us to estimate the total distance traveled as
93.3 km + 100 km + 120 km = 313.3 km, which is an example of numerical
integration (see below) using a Riemann sum, because displacement is the
integral of velocity.
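The same calculation takes only a few lines; this sketch follows the piecewise-constant assumption of the text exactly:

    # Sketch: distance as a Riemann sum -- the speed is treated as constant on each
    # 40-minute (2/3-hour) interval, using the measurement taken inside that interval.
    speeds_kmh = [140.0, 150.0, 180.0]   # measured at 0:20, 1:00 and 1:40
    interval_h = 2.0 / 3.0               # each interval is 40 minutes

    distance_km = sum(v * interval_h for v in speeds_kmh)
    print(round(distance_km, 1))         # 93.3 + 100.0 + 120.0 = 313.3 km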

Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.


Well-conditioned problem: By contrast, the function f(x) = √x is continuous and so evaluating it is well-conditioned, at least for x not close to zero.

Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite
precision arithmetic. Examples include Gaussian elimination, the QR
factorization method for solving systems of linear equations, and the simplex
method of linear programming. In practice, finite precision is used and the result
is an approximation of the true solution (assuming stability).

In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive
approximations that converge to the exact solution only in the limit. A
convergence criterion is specified in order to decide when a sufficiently accurate
solution has (hopefully) been found. Even using infinite precision arithmetic these
methods would not reach the solution within a finite number of steps (in general).
Examples include Newton's method, the bisection method, and Jacobi iteration. In
computational matrix algebra, iterative methods are generally needed for large
problems.

Iterative methods are more common than direct methods in numerical analysis.
Some methods are direct in principle but are usually used as though they were
not, e.g. GMRES and the conjugate gradient method. For these methods the
number of steps needed to obtain the exact solution is so large that an
approximation is accepted in the same manner as for an iterative method.

Discretization

Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem;
this process is called discretization. For example, the solution of a differential
equation is a function. This function must be represented by a finite amount of
data, for instance by its value at a finite number of points in its domain, even
though this domain is a continuum.

The generation and propagation of errors

The study of errors forms an important part of numerical analysis. There are
several ways in which error can be introduced in the solution of the problem.


Round-off

Round-off errors arise because it is impossible to represent all real numbers exactly on a finite-state machine (which is what all practical digital computers
are).

Truncation and discretization error

Truncation errors are committed when an iterative method is terminated and the
approximate solution differs from the exact solution. Similarly, discretization
induces a discretization error because the solution of the discrete problem does
not coincide with the solution of the continuous problem. For instance, in the iteration above to compute the solution of 3x³ + 4 = 28, after 10 or so iterations, we conclude that the root is roughly 1.99 (for example). We therefore
have a truncation error of 0.01.

Once an error is generated, it will generally propagate through the calculation. For instance, as discussed above, the arithmetic operations a calculator (or a computer) performs are subject to round-off and are therefore inexact. It follows that a calculation of the type a + b + c + d + e is even more inexact.

Numerical stability and well-posed problems

Numerical stability is an important notion in numerical analysis. An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much
larger during the calculation. This happens if the problem is well-conditioned,
meaning that the solution changes by only a small amount if the problem data are
changed by a small amount. To the contrary, if a problem is ill-conditioned, then
any small error in the data will grow to be a large error.

Both the original problem and the algorithm used to solve that problem can be
well-conditioned and/or ill-conditioned, and any combination is possible.

So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x1 to √2, for instance x1 = 1.4, and then computing improved guesses x2, x3, and so on. One such method is the famous Babylonian method, which is given by xk+1 = xk/2 + 1/xk. Another iteration, which we will call Method X, is given by xk+1 = (xk² − 2)² + xk. We have calculated a few iterations of each scheme in table form below, with initial guesses x1 = 1.4 and x1 = 1.42.


Babylonian (x1 = 1.4)    Babylonian (x1 = 1.42)    Method X (x1 = 1.4)       Method X (x1 = 1.42)
x2 = 1.41...             x2 = 1.41...              x2 = 1.40...              x2 = 1.42...
x3 = 1.41...             x3 = 1.41...              x3 = 1.40...              x3 = 1.42...
...                      ...                       x1000000 = 1.41421...     x28 = 7280.2284...

Observe that the Babylonian method converges fast regardless of the initial guess,
whereas Method X converges extremely slowly with initial guess 1.4 and diverges
for initial guess 1.42. Hence, the Babylonian method is numerically stable, while
Method X is numerically unstable.
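Both iterations are easy to reproduce; the sketch below (an illustrative transcription, with the number of iterations chosen arbitrarily) runs each scheme from the two starting guesses.

    # Sketch: the Babylonian iteration x_{k+1} = x_k/2 + 1/x_k (for sqrt(2))
    # versus the unstable "Method X" iteration x_{k+1} = (x_k**2 - 2)**2 + x_k.
    def babylonian(x):
        return x / 2.0 + 1.0 / x

    def method_x(x):
        return (x * x - 2.0) ** 2 + x

    for x0 in (1.4, 1.42):
        xb, xx = x0, x0
        for _ in range(30):
            xb, xx = babylonian(xb), method_x(xx)
        print(f"start {x0}: Babylonian -> {xb:.6f}, Method X -> {xx:.6g}")

After 30 steps the Babylonian iterates agree with √2 to six decimals for both starting values, while Method X is still creeping toward the root from 1.4 and has blown up completely from 1.42.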

Areas of study

The field of numerical analysis is divided into different disciplines according to the problem that is to be solved.

Computing values of functions

Interpolation: We have observed the temperature to vary from 20 degrees Celsius at 1:00 to 14 degrees Celsius at 3:00. A linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30.
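A quick check of this interpolation, as a sketch using NumPy with the times expressed in hours:

    # Sketch: linear interpolation of temperature between 1:00 (20 C) and 3:00 (14 C).
    import numpy as np

    times = [1.0, 3.0]           # hours
    temps = [20.0, 14.0]         # degrees Celsius
    print(np.interp([2.0, 1.5], times, temps))   # -> [17.  18.5]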

Extrapolation: If the gross domestic product of a country has been growing an average of 5% per year and was 100 billion dollars last year, we might extrapolate that it will be 105 billion dollars this year.

Regression: In linear regression, given n points, we compute a line that passes as close as possible to those n points.

Optimization: Say you sell lemonade at a lemonade stand, and notice that at $1, you can sell 197 glasses of lemonade per day, and that for each increase of $0.01, you will sell one less lemonade per day. If you could charge $1.485, you would maximize your profit, but due to the constraint of having to charge a whole-cent amount, charging $1.48 or $1.49 per glass will both yield the maximum income of $220.52 per day.
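The lemonade example can be verified by brute force over whole-cent prices; the sketch below works in integer cents to avoid floating-point ties:

    # Sketch: brute-force check of the lemonade example. Price = 100 + n cents,
    # glasses sold = 197 - n, so revenue in cents is (100 + n) * (197 - n).
    revenues = {100 + n: (100 + n) * (197 - n) for n in range(197)}
    best_revenue = max(revenues.values())
    best_prices = [p for p, rev in revenues.items() if rev == best_revenue]
    print(best_prices, best_revenue / 100)   # -> [148, 149] 220.52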

Differential equation: If you set up 100 fans to blow air from one end of the room to the other and then you drop a feather into the wind, what happens? The feather will follow the air currents, which may be very complex. One
approximation is to measure the speed at which the air is blowing near the feather
every second, and advance the simulated feather as if it were moving in a straight
line at that same speed for one second, before measuring the wind speed again.
This is called the Euler method for solving an ordinary differential equation.

One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is to use the
Horner scheme, since it reduces the necessary number of multiplications and
additions. Generally, it is important to estimate and control round-off errors
arising from the use of floating point arithmetic.
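A sketch of the Horner scheme next to naive evaluation, for an arbitrary example polynomial; the point is the reduced operation count, not the particular coefficients:

    # Sketch: evaluate p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3.
    coeffs = [2, -6, 2, -1]          # highest degree first

    def horner(coeffs, x):
        result = 0
        for c in coeffs:
            result = result * x + c  # one multiplication and one addition per coefficient
        return result

    def naive(coeffs, x):
        # Recomputes each power of x, so it needs many more multiplications.
        n = len(coeffs) - 1
        return sum(c * x ** (n - i) for i, c in enumerate(coeffs))

    print(horner(coeffs, 3), naive(coeffs, 3))   # both print 5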

Interpolation, extrapolation, and regression

Interpolation solves the following problem: given the value of some unknown
function at a number of points, what value does that function have at some other
point between the given points? A very simple method is to use linear
interpolation, which assumes that the unknown function is linear between every
pair of successive points. This can be generalized to polynomial interpolation,
which is sometimes more accurate but suffers from Runge's phenomenon. Other
interpolation methods use localized functions like splines or wavelets.

Extrapolation is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.

Regression is also similar, but it takes into account that the data is imprecise. Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function. The least-squares method is one popular way to achieve this.
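A short sketch of the least-squares idea: fit a straight line to a handful of invented, noisy points with numpy.polyfit.

    # Sketch: least-squares fit of a straight line y = a*x + b to imprecise data.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])    # roughly y = 2x + 1 with noise

    a, b = np.polyfit(x, y, 1)                 # degree-1 polynomial = straight line
    print(f"slope {a:.3f}, intercept {b:.3f}") # close to 2 and 1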

Areas of application

• Scientific computing
• List of numerical analysis topics
• Gram-Schmidt process
• Numerical differentiation
• Symbolic-numeric computation

General

• numerical-methods.com
• numericalmathematics.com
• Numerical Recipes
• "Alternatives to Numerical Recipes"
• Scientific computing FAQ

• Numerical analysis DMOZ category


• Numerical Computing Resources on the Internet - maintained by Indiana
University Stat/Math Center
• Numerical Methods Resources

Software

Since the late twentieth century, most algorithms are implemented in a variety of
programming languages. The Netlib repository contains various collections of
software routines for numerical problems, mostly in Fortran and C. Commercial
products implementing many different numerical algorithms include the IMSL
and NAG libraries; a free alternative is the GNU Scientific Library.

There are several popular numerical computing applications such as MATLAB, S-PLUS, LabVIEW, IDL and Scilab, as well as free and open source
alternatives such as FreeMat, GNU Octave (similar to Matlab), IT++ (a C++
library), R (similar to S-PLUS) and certain variants of Python. Performance
varies widely: while vector and matrix operations are usually fast, scalar loops
may vary in speed by more than an order of magnitude.

Many computer algebra systems such as Mathematica also benefit from the
availability of arbitrary precision arithmetic which can provide more accurate
results.

Also, any spreadsheet software can be used to solve simple problems relating to
numerical analysis.


Computational intelligence

Computational intelligence (CI) is an offshoot of artificial intelligence. As an alternative to GOFAI, it relies instead on heuristic algorithms such as those used in fuzzy systems, neural networks and evolutionary computation. In addition, computational intelligence also embraces techniques that use swarm intelligence, fractals and chaos theory, artificial immune systems, wavelets, etc.

Computational intelligence combines elements of learning, adaptation, evolution and Fuzzy logic (rough sets) to create programs that are, in some sense,
intelligent. Computational intelligence research does not reject statistical methods,
but often gives a complementary view (as is the case with fuzzy systems).
The study of artificial neural networks is a branch of computational intelligence that is closely related to machine learning.

Computational intelligence is further closely associated with soft computing, connectionist systems and cybernetics.

Artificial intelligence

Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Major AI textbooks define the field as
"the study and design of intelligent agents," where an intelligent agent is a system
that perceives its environment and takes actions which maximize its chances of
success. John McCarthy, who coined the term in 1956, defines it as "the science
and engineering of making intelligent machines."

The field was founded on the claim that a central property of human beings,
intelligence—the sapience of Homo sapiens—can be so precisely described that it
can be simulated by a machine. This raises philosophical issues about the nature
of the mind and limits of scientific hubris, issues which have been addressed by
myth, fiction and philosophy since antiquity. Artificial intelligence has been the
subject of breathtaking optimism, has suffered stunning setbacks and, today, has
become an essential part of the technology industry, providing the heavy lifting
for many of the most difficult problems in computer science.

AI research is highly technical and specialized, so much so that some critics decry
the "fragmentation" of the field. Subfields of AI are organized around particular
problems, the application of particular tools and around longstanding theoretical
differences of opinion. The central problems of AI include such traits as
reasoning, knowledge, planning, learning, communication, perception and the
ability to move and manipulate objects. General intelligence (or "strong AI") is
still a long-term goal of (some) research.


Perspectives on CI

CI in myth, fiction and speculation

Thinking machines and artificial beings appear in Greek myths, such as Talos of
Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human
likenesses believed to have intelligence were built in many ancient societies;
some of the earliest being the sacred statues worshipped in Egypt and Greece, and
including the machines of Yan Shi, Hero of Alexandria, Al-Jazari or Wolfgang
von Kempelen. It was widely believed that artificial beings had been created by
Geber, Judah Loew and Paracelsus. Stories of these creatures and their fates
discuss many of the same hopes, fears and ethical concerns that are presented by
artificial intelligence.

Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If
it can feel, does it have the same rights as a human being? The idea also appears
in modern science fiction: the film Artificial Intelligence: A.I. considers a
machine in the form of a small boy which has been given the ability to feel human
emotions, including, tragically, the capacity to suffer. This issue, now known as
"robot rights", is currently being considered by, for example, California's Institute
for the Future, although many critics believe that the discussion is premature.

Another issue explored by both science fiction writers and futurists is the impact
of artificial intelligence on society. In fiction, AI has appeared as a servant (R2D2
in Star Wars), a law enforcer (K.I.T.T. in Knight Rider), a comrade (Lt.
Commander Data in Star Trek), a conqueror (The Matrix), a dictator (With Folded
Hands), an exterminator (Terminator, Battlestar Galactica), an extension to human
abilities (Ghost in the Shell) and the saviour of the human race (R. Daneel Olivaw
in the Foundation Series). Academic sources have considered such consequences
as: a decreased demand for human labor, the enhancement of human ability or
experience, and a need for redefinition of human identity and basic values.

Several futurists argue that artificial intelligence will transcend the limits of
progress and fundamentally transform humanity. Ray Kurzweil has used Moore's
law (which describes the relentless exponential improvement in digital technology
with uncanny accuracy) to calculate that desktop computers will have the same
processing power as human brains by the year 2029, and that by 2045 artificial
intelligence will reach a point where it is able to improve itself at a rate that far
exceeds anything conceivable in the past, a scenario that science fiction writer
Vernor Vinge named the "technological singularity". Edward Fredkin argues that
"artificial intelligence is the next stage in evolution," an idea first proposed by
Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by
George Dyson in his book of the same name in 1998. Several futurists and science
fiction writers have predicted that human beings and machines will merge in the
future into cyborgs that are more capable and powerful than either. This idea,
called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, is
now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick
and inventor Ray Kurzweil. Transhumanism has been illustrated in fiction as well,
for example in the manga Ghost in the Shell and the science fiction series Dune.
Pamela McCorduck writes that these scenarios are expressions of an ancient
human desire to, as she calls it, "forge the gods."

History of CI research

In the middle of the 20th century, a handful of scientists began a new approach to
building intelligent machines, based on recent discoveries in neurology, a new
mathematical theory of information, an understanding of control and stability
called cybernetics, and above all, by the invention of the digital computer, a
machine based on the abstract essence of mathematical reasoning.

The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become
the leaders of AI research for many decades, especially John McCarthy, Marvin
Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT,
CMU and Stanford. They and their students wrote programs that were, to most
people, simply astonishing: computers were solving word problems in algebra,
proving logical theorems and speaking English. By the middle 60s their research
was heavily funded by the U.S. Department of Defense, and they were optimistic
about the future of the new field:

• 1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any work a man can do"
• 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

These predictions, and many like them, would not come true. They had failed to
recognize the difficulty of some of the problems they faced. In 1974, in response
to the criticism of England's Sir James Lighthill and ongoing pressure from
Congress to fund more productive projects, the U.S. and British governments cut
off all undirected, exploratory research in AI. This was the first AI winter.

In the early 80s, AI research was revived by the commercial success of expert
systems, a form of AI program that simulated the knowledge and analytical skills
of one or more human experts. By 1985 the market for AI had reached more than
a billion dollars, and governments around the world poured money back into the
field. However, just a few years later, beginning with the collapse of the Lisp
Machine market in 1987, AI once again fell into disrepute, and a second, longer
lasting AI winter began.


In the 90s and early 21st century, AI achieved its greatest successes, albeit
somewhat behind the scenes. Artificial intelligence is used for logistics, data
mining, medical diagnosis and many other areas throughout the technology
industry. The success was due to several factors: the incredible power of
computers today (see Moore's law), a greater emphasis on solving specific
subproblems, the creation of new ties between AI and other fields working on
similar problems, and above all a new commitment by researchers to solid
mathematical methods and rigorous scientific standards.

Philosophy of AI

Artificial intelligence, by claiming to be able to recreate the capabilities of the human mind, is both a challenge and an inspiration for philosophy. Are there
limits to how intelligent machines can be? Is there an essential difference between
human intelligence and artificial intelligence? Can a machine have a mind and
consciousness? A few of the most influential answers to these questions are given
below.

Turing's "polite convention"

If a machine acts as intelligently as a human being, then it is as intelligent as a human being. Alan Turing theorized that, ultimately, we can only judge the
intelligence of a machine based on its behavior. This theory forms the basis of
the Turing test.

The Dartmouth proposal

"Every aspect of learning or any other feature of intelligence can be so


precisely described that a machine can be made to simulate it." This assertion
was printed in the proposal for the Dartmouth Conference of 1956, and
represents the position of most working AI researchers.

Newell and Simon's physical symbol system hypothesis

"A physical symbol system has the necessary and sufficient means of general
intelligent action." This statement claims that the essence of intelligence is
symbol manipulation. Hubert Dreyfus argued that, on the contrary, human
expertise depends on unconscious instinct rather than conscious symbol
manipulation and on having a "feel" for the situation rather than explicit
symbolic knowledge.

Gödel's incompleteness theorem

A formal system (such as a computer program) cannot prove all true
statements. Roger Penrose is among those who claim that Gödel's theorem
limits what machines can do.

Searle's strong AI hypothesis

"The appropriately programmed computer with the right inputs and outputs
would thereby have a mind in exactly the same sense human beings have
minds." Searle counters this assertion with his Chinese room argument, which
asks us to look inside the computer and try to find where the "mind" might be.

The artificial brain argument

The brain can be simulated. Hans Moravec, Ray Kurzweil and others have
argued that it is technologically feasible to copy the brain directly into
hardware and software, and that such a simulation will be essentially identical
to the original.

CI research

In the 21st century, AI research has become highly specialized and technical. It is
deeply divided into subfields that often fail to communicate with each other.
Subfields have grown up around particular institutions, the work of particular
researchers, particular problems (listed below), long standing differences of
opinion about how AI should be done (listed as "approaches" below) and the
application of widely differing tools (see tools of AI, below).

Problems of AI

The problem of simulating (or creating) intelligence has been broken down into a
number of specific sub-problems. These consist of particular traits or capabilities
that researchers would like an intelligent system to display. The traits described
below have received the most attention.

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step
reasoning that human beings use when they solve puzzles, play board games or
make logical deductions. By the late 80s and 90s, AI research had also developed
highly successful methods for dealing with uncertain or incomplete information,
employing concepts from probability and economics.


For difficult problems, most of these algorithms can require enormous
computational resources — most experience a "combinatorial explosion": the
amount of memory or computer time required becomes astronomical when the
problem goes beyond a certain size. The search for more efficient problem solving
algorithms is a high priority for AI research.

Human beings solve most of their problems using fast, intuitive judgments rather
than the conscious, step-by-step deduction that early AI research was able to
model. AI has made some progress at imitating this kind of "sub-symbolic"
problem solving: embodied approaches emphasize the importance of sensorimotor
skills to higher reasoning; neural net research attempts to simulate the structures
inside human and animal brains that give rise to this skill.

Knowledge representation

Knowledge representation and knowledge engineering are central to AI research.
Many of the problems machines are expected to solve will require extensive
knowledge about the world. Among the things that AI needs to represent are:
objects, properties, categories and relations between objects; situations, events,
states and time; causes and effects; knowledge about knowledge (what we know
about what other people know); and many other, less well researched domains. A
complete representation of "what exists" is an ontology (borrowing a word from
traditional philosophy), of which the most general are called upper ontologies.

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem

Many of the things people know take the form of "working assumptions." For
example, if a bird comes up in conversation, people typically picture an
animal that is fist sized, sings, and flies. None of these things are true about all
birds. John McCarthy identified this problem in 1969 as the qualification
problem: for any commonsense rule that AI researchers care to represent,
there tend to be a huge number of exceptions. Almost nothing is simply true or
false in the way that abstract logic requires. AI research has explored a
number of solutions to this problem.

The breadth of commonsense knowledge

The number of atomic facts that the average person knows is astronomical.
Research projects that attempt to build a complete knowledge base of
commonsense knowledge (e.g., Cyc) require enormous amounts of laborious
ontological engineering — they must be built, by hand, one complicated
concept at a time.

The subsymbolic form of some commonsense knowledge


Much of what people know isn't represented as "facts" or "statements" that they
could actually say out loud. For example, a chess master will avoid a particular
chess position because it "feels too exposed" or an art critic can take one look at a
statue and instantly realize that it is a fake. These are intuitions or tendencies that
are represented in the brain non-consciously and sub-symbolically. Knowledge
like this informs, supports and provides a context for symbolic, conscious
knowledge. As with the related problem of sub-symbolic reasoning, it is hoped
that situated AI or computational intelligence will provide ways to represent this
kind of knowledge.

Planning

Intelligent agents must be able to set goals and achieve them. They need a way to
visualize the future (they must have a representation of the state of the world and
be able to make predictions about how their actions will change it) and be able to
make choices that maximize the utility (or "value") of the available choices.

In some planning problems, the agent can assume that it is the only thing acting
on the world and it can be certain what the consequences of its actions may be.
However, if this is not true, it must periodically check if the world matches its
predictions and it must change its plan as this becomes necessary, requiring the
agent to reason under uncertainty.

Multi-agent planning uses the cooperation and competition of many agents to
achieve a given goal. Emergent behavior such as this is used by evolutionary
algorithms and swarm intelligence.

Learning

Machine learning has been central to AI research from the beginning.
Unsupervised learning is the ability to find patterns in a stream of input.
Supervised learning includes both classification (being able to determine what
category something belongs in, after seeing a number of examples of things from
several categories) and regression (given a set of numerical input/output
examples, discover a continuous function that would generate the outputs from
the inputs). In reinforcement learning the agent is rewarded for good responses
and punished for bad ones. These can be analyzed in terms of decision theory,
using concepts like utility. The mathematical analysis of machine learning
algorithms and their performance is a branch of theoretical computer science
known as computational learning theory.
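
As a concrete illustration of the supervised setting, the following Python sketch fits a
straight line to a handful of numerical input/output examples, which is the regression
task described above; the data points and function names are invented for this example.

    # Minimal supervised regression: recover a continuous function (a line)
    # from labelled numerical examples. The training data are invented.
    def fit_line(xs, ys):
        """Ordinary least-squares fit of y = a*x + b."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
        b = mean_y - a * mean_x
        return a, b

    xs = [0.0, 1.0, 2.0, 3.0, 4.0]          # inputs
    ys = [1.1, 2.9, 5.2, 6.8, 9.1]          # noisy outputs of roughly y = 2x + 1
    a, b = fit_line(xs, ys)
    print("learned function: y = %.2f*x + %.2f" % (a, b))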

Natural language processing

Natural language processing gives machines the ability to read and understand the
languages that the human beings speak. Many researchers hope that a sufficiently
powerful natural language processing system would be able to acquire knowledge


on its own, by reading the existing text available over the internet. Some
straightforward applications of natural language processing include information
retrieval (or text mining) and machine translation.

Motion and manipulation

The field of robotics is closely related to AI. Intelligence is required for robots to
be able to handle such tasks as object manipulation and navigation, with sub-
problems of localization (knowing where you are), mapping (learning what is
around you) and motion planning (figuring out how to get there).

Perception

Machine perception is the ability to use input from sensors (such as cameras,
microphones, sonar and others more exotic) to deduce aspects of the world.
Computer vision is the ability to analyze visual input. A few selected subproblems
are speech recognition, facial recognition and object recognition.

Social intelligence

Emotion and social skills play two roles for an intelligent agent:

• It must be able to predict the actions of others, by understanding their
motives and emotional states. (This involves elements of game theory,
decision theory, as well as the ability to model human emotions and the
perceptual skills to detect emotions.)
• For good human-computer interaction, an intelligent machine also needs
to display emotions — at the very least it must appear polite and sensitive
to the humans it interacts with. At best, it should have normal emotions
itself.

Creativity

A sub-field of AI addresses creativity both theoretically (from a philosophical and
psychological perspective) and practically (via specific implementations of
systems that generate outputs that can be considered creative).

General intelligence

Most researchers hope that their work will eventually be incorporated into a
machine with general intelligence (known as strong AI), combining all the skills
above and exceeding human abilities at most or all of them. A few believe that
anthropomorphic features like artificial consciousness or an artificial brain may be
required for such a project.


Many of the problems above are considered AI-complete: to solve one problem,
you must solve them all. For example, even a straightforward, specific task like
machine translation requires that the machine follow the author's argument
(reason), know what it's talking about (knowledge), and faithfully reproduce the
author's intention (social intelligence). Machine translation, therefore, is believed
to be AI-complete: it may require strong AI to be done as well as humans can do
it.

Approaches to CI

There is no established unifying theory or paradigm that guides AI research.
Researchers disagree about many issues. A few of the most long standing
questions that have remained unanswered are these: Can intelligence be
reproduced using high-level symbols, similar to words and ideas? Or does it
require "sub-symbolic" processing? Should artificial intelligence simulate natural
intelligence, by studying human psychology or animal neurobiology? Or is human
biology as irrelevant to AI research as bird biology is to aeronautical engineering?
Can intelligent behavior be described using simple, elegant principles (such as
logic or optimization)? Or does artificial intelligence necessarily require solving
many unrelated problems?

Cybernetics and brain simulation

The human brain provides inspiration for artificial intelligence researchers;
however, there is no consensus on how closely it should be simulated.

In the 1940s and 1950s, a number of researchers explored the connection between
neurology, information theory, and cybernetics. Some of them built machines that
used electronic networks to exhibit rudimentary intelligence, such as W. Grey
Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered
for meetings of the Teleological Society at Princeton University and the Ratio
Club in England. By 1960, this approach was largely abandoned, although
elements of it would be revived in the 1980s.

Traditional symbolic AI

When access to digital computers became possible in the middle 1950s, AI
research began to explore the possibility that human intelligence could be reduced
to symbol manipulation. The research was centered in three institutions: CMU,
Stanford and MIT, and each one developed its own style of research. John
Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".

Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem solving
skills and attempted to formalize them, and their work laid the foundations of
the field of artificial intelligence, as well as cognitive science, operations
research and management science. Their research team performed
psychological experiments to demonstrate the similarities between human
problem solving and the programs (such as their "General Problem Solver")
they were developing. This tradition, centered at Carnegie Mellon University
would eventually culminate in the development of the Soar architecture in the
middle 80s.

Logical AI

Unlike Newell and Simon, John McCarthy felt that machines did not need to
simulate human thought, but should instead try to find the essence of abstract
reasoning and problem solving, regardless of whether people used the same
algorithms. His laboratory at Stanford (SAIL) focused on using formal logic
to solve a wide variety of problems, including knowledge representation,
planning and learning. Logic was also the focus of the work at the University of
Edinburgh and elsewhere in Europe which led to the development of the
programming language Prolog and the science of logic programming.

"Scruffy" symbolic AI

Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that
solving difficult problems in vision and natural language processing required
ad-hoc solutions – they argued that there was no simple and general principle
(like logic) that would capture all the aspects of intelligent behavior. Roger
Schank described their "anti-logic" approaches as "scruffy" (as opposed to the
"neat" paradigms at CMU and Stanford). Commonsense knowledge bases
(such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be
built by hand, one complicated concept at a time.

Knowledge based AI

When computers with large memories became available around 1970,
researchers from all three traditions began to build knowledge into AI
applications. This "knowledge revolution" led to the development and
deployment of expert systems (introduced by Edward Feigenbaum), the first
truly successful form of AI software. The knowledge revolution was also
driven by the realization that enormous amounts of knowledge would be
required by many simple AI applications.

Sub-symbolic AI

During the 1960s, symbolic approaches had achieved great success at simulating
high-level thinking in small demonstration programs. Approaches based on
cybernetics or neural networks were abandoned or pushed into the background.


By the 1980s, however, progress in symbolic AI seemed to stall and many
believed that symbolic systems would never be able to imitate all the processes of
human cognition, especially perception, robotics, learning and pattern recognition.
A number of researchers began to look into "sub-symbolic" approaches to specific
AI problems.

Bottom-up, embodied, situated, behavior-based or nouvelle AI

Researchers from the related field of robotics, such as Rodney Brooks,
rejected symbolic AI and focused on the basic engineering problems that
would allow robots to move and survive. Their work revived the non-
symbolic viewpoint of the early cybernetics researchers of the 50s and
reintroduced the use of control theory in AI. These approaches are also
conceptually related to the embodied mind thesis.

Computational Intelligence

Interest in neural networks and "connectionism" was revived by David
Rumelhart and others in the middle 1980s. These and other sub-symbolic
approaches, such as fuzzy systems and evolutionary computation, are now
studied collectively by the emerging discipline of computational intelligence.

Statistical AI

In the 1990s, AI researchers developed sophisticated mathematical tools to solve
specific subproblems. These tools are truly scientific, in the sense that their results
are both measurable and verifiable, and they have been responsible for many of
AI's recent successes. The shared mathematical language has also permitted a
high level of collaboration with more established fields (like mathematics,
economics or operations research). Russell & Norvig (2003) describe this
movement as nothing less than a "revolution" and "the victory of the neats."

Integrating the approaches

Intelligent agent paradigm

An intelligent agent is a system that perceives its environment and takes
actions that maximize its chances of success. The simplest intelligent
agents are programs that solve specific problems. The most complicated
intelligent agents are rational, thinking human beings. The paradigm gives
researchers license to study isolated problems and find solutions that are both
verifiable and useful, without agreeing on one single approach. An agent that
solves a specific problem can use any approach that works — some agents are
symbolic and logical, some are sub-symbolic neural networks and others may
use new approaches. The paradigm also gives researchers a common language
to communicate with other fields—such as decision theory and economics—
that also use concepts of abstract agents. The intelligent agent paradigm
became widely accepted during the 1990s.

An agent architecture or cognitive architecture

Researchers have designed systems to build intelligent systems out of
interacting intelligent agents in a multi-agent system. A system with both
symbolic and sub-symbolic components is a hybrid intelligent system, and the
study of such systems is artificial intelligence systems integration. A
hierarchical control system provides a bridge between sub-symbolic AI at its
lowest, reactive levels and traditional symbolic AI at its highest levels, where
relaxed time constraints permit planning and world modelling. Rodney
Brooks' subsumption architecture was an early proposal for such a hierarchical
system.

Tools of CI research

In the course of 50 years of research, AI has developed a large number of tools to
solve the most difficult problems in computer science. A few of the most general
of these methods are discussed below.

Search and optimization

Many problems in AI can be solved in theory by intelligently searching through
many possible solutions: Reasoning can be reduced to performing a search. For
example, logical proof can be viewed as searching for a path that leads from
premises to conclusions, where each step is the application of an inference rule.
Planning algorithms search through trees of goals and subgoals, attempting to find
a path to a target goal, a process called means-ends analysis. Robotics algorithms
for moving limbs and grasping objects use local searches in configuration space.
Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches are rarely sufficient for most real world problems: the
search space (the number of places to search) quickly grows to astronomical
numbers. The result is a search that is too slow or never completes. The solution,
for many problems, is to use "heuristics" or "rules of thumb" that eliminate
choices that are unlikely to lead to the goal (called "pruning the search tree").
Heuristics supply the program with a "best guess" for what path the solution lies
on.

A very different kind of search came to prominence in the 1990s, based on the
mathematical theory of optimization. For many problems, it is possible to begin
the search with some form of a guess and then refine the guess incrementally until
no more refinements can be made. These algorithms can be visualized as blind
hill climbing: we begin the search at a random point on the landscape, and then,
by jumps or steps, we keep moving our guess uphill, until we reach the top. Other
optimization algorithms are simulated annealing, beam search and random
optimization.
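
A minimal Python sketch of the hill-climbing idea just described; the one-dimensional
objective function and the step size are invented for illustration, whereas real AI
problems are high-dimensional and full of local optima.

    import random

    def hill_climb(objective, start, step=0.1, iterations=10000):
        """Greedy local search: keep any random neighbour that scores higher."""
        current, best = start, objective(start)
        for _ in range(iterations):
            candidate = current + random.uniform(-step, step)
            score = objective(candidate)
            if score > best:                 # only ever move "uphill"
                current, best = candidate, score
        return current, best

    # Toy landscape with a single peak at x = 2.
    print(hill_climb(lambda x: -(x - 2.0) ** 2, start=random.uniform(-10, 10)))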

Evolutionary computation uses a form of optimization search. For example, an
evolutionary algorithm may begin with a population of organisms (the guesses)
and then allow them to mutate and recombine, selecting only the fittest to survive
each generation
(refining the guesses). Forms of evolutionary computation include swarm
intelligence algorithms (such as ant colony or particle swarm optimization) and
evolutionary algorithms (such as genetic algorithms and genetic programming).
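
The sketch below shows this evolutionary loop in miniature for the toy "OneMax"
problem (evolve a bit string of all 1s); the population size, mutation rate and
selection scheme are arbitrary choices made for illustration.

    import random

    LENGTH, POP_SIZE, GENERATIONS, MUTATION = 20, 30, 60, 0.05

    def fitness(bits):                      # number of 1s in the string
        return sum(bits)

    def mutate(bits):                       # flip each bit with small probability
        return [1 - b if random.random() < MUTATION else b for b in bits]

    def crossover(a, b):                    # one-point recombination
        cut = random.randint(1, LENGTH - 1)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]        # only the fittest survive
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print(max(fitness(ind) for ind in population))  # approaches LENGTH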

Logic

Logic was introduced into AI research by John McCarthy in his 1958 Advice
Taker proposal. In 1963, J. Alan Robinson discovered a simple, complete and
entirely algorithmic method for logical deduction which can easily be performed
by digital computers. However, a naive implementation of the algorithm quickly
leads to a combinatorial explosion or an infinite loop. In 1974, Robert Kowalski
suggested representing logical expressions as Horn clauses (statements in the
form of rules: "if p then q"), which reduced logical deduction to backward
chaining or forward chaining. This greatly alleviated (but did not eliminate) the
problem.
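
A minimal sketch of forward chaining over Horn-clause rules of the kind described
above ("if p then q"); the facts and rules are invented for this example.

    # Each rule is (set of premises, conclusion); chaining repeats until no
    # rule can add a new fact.
    rules = [({"rain", "outside"}, "wet"), ({"wet"}, "cold")]
    facts = {"rain", "outside"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)        # fire the rule
                changed = True

    print(facts)                             # {'rain', 'outside', 'wet', 'cold'}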

Logic is used for knowledge representation and problem solving, but it can be
applied to other problems as well. For example, the satplan algorithm uses logic
for planning, and inductive logic programming is a method for learning. There are
several different forms of logic used in AI research.

• Propositional or sentential logic is the logic of statements which can be
true or false.
• First-order logic also allows the use of quantifiers and predicates, and can
express facts about objects, their properties, and their relations
with each other.
• Fuzzy logic is a version of first-order logic which allows the truth of a
statement to be represented as a value between 0 and 1, rather than
simply True (1) or False (0). Fuzzy systems can be used for uncertain
reasoning and have been widely used in modern industrial and
consumer product control systems.
• Default logics, non-monotonic logics and circumscription are forms of
logic designed to help with default reasoning and the qualification
problem.
• Several extensions of logic have been designed to handle specific domains
of knowledge, such as: description logics; situation calculus, event

calculus and fluent calculus (for representing events and time); causal
calculus; belief calculus; and modal logics.

Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics)
require the agent to operate with incomplete or uncertain information. Starting in
the late 80s and early 90s, Judea Pearl and others championed the use of methods
drawn from probability theory and economics to devise a number of powerful
tools to solve these problems.

Bayesian networks are a very general tool that can be used for a large number of
problems: reasoning (using the Bayesian inference algorithm), learning (using the
expectation-maximization algorithm), planning (using decision networks) and
perception (using dynamic Bayesian networks).
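
The smallest possible example of this style of inference is Bayes' rule applied to a
two-node network (Disease -> Test); all probabilities below are invented for
illustration.

    p_disease = 0.01                  # prior P(D)
    p_pos_given_disease = 0.95        # sensitivity P(+|D)
    p_pos_given_healthy = 0.05        # false-positive rate P(+|not D)

    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1.0 - p_disease))          # evidence P(+)
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

    print("P(disease | positive test) = %.3f" % p_disease_given_pos)  # ~0.161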

Probabilistic algorithms can also be used for filtering, prediction, smoothing and
finding explanations for streams of data, helping perception systems to analyze
processes that occur over time (e.g., hidden Markov models or Kalman filters).

A key concept from the science of economics is "utility": a measure of how
valuable something is to an intelligent agent. Precise mathematical tools have
been developed that analyze how an agent can make choices and plan, using
decision theory, decision analysis, and information value theory. These tools include
models such as Markov decision processes, dynamic decision networks, game
theory and mechanism design.

Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny
then diamond") and controllers ("if shiny then pick up"). Controllers do, however,
also classify conditions before inferring actions, and therefore classification forms
a central part of many AI systems.

Classifiers are functions that use pattern matching to determine a closest match.
They can be tuned according to examples, making them very attractive for use in
AI. These examples are known as observations or patterns. In supervised learning,
each pattern belongs to a certain predefined class. A class can be seen as a
decision that has to be made. All the observations combined with their class labels
are known as a data set.

When a new observation is received, that observation is classified based on
previous experience. A classifier can be trained in various ways; there are many
statistical and machine learning approaches.


A wide range of classifiers are available, each with its strengths and weaknesses.
Classifier performance depends greatly on the characteristics of the data to be
classified. There is no single classifier that works best on all given problems; this
is also referred to as the "no free lunch" theorem. Various empirical tests have
been performed to compare classifier performance and to find the characteristics
of data that determine classifier performance. Determining a suitable classifier for
a given problem is however still more an art than science.

The most widely used classifiers are the neural network, kernel methods such as
the support vector machine, k-nearest neighbor algorithm, Gaussian mixture
model, naive Bayes classifier, and decision tree. The performance of these
classifiers has been compared over a wide range of classification tasks in order
to find data characteristics that determine classifier performance.
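
As one concrete example, the k-nearest neighbor classifier mentioned above can be
written in a few lines of Python; the two-dimensional observations and their labels
are invented.

    from collections import Counter

    def knn_classify(train, query, k=3):
        """Label a query by majority vote among its k nearest neighbours."""
        nearest = sorted(train, key=lambda item: sum((a - b) ** 2
                                                     for a, b in zip(item[0], query)))
        votes = Counter(label for _, label in nearest[:k])
        return votes.most_common(1)[0][0]

    train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
             ((4.0, 4.0), "B"), ((4.2, 3.9), "B"), ((3.8, 4.1), "B")]
    print(knn_classify(train, (1.1, 1.0)))   # "A"
    print(knn_classify(train, (4.1, 4.0)))   # "B"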

Neural networks

A neural network is an interconnected group of nodes, akin to the vast network of
neurons in the human brain.

The study of artificial neural networks began in the decade before the field of AI
research was founded. In the late 1950s Frank Rosenblatt developed an important
early version, the perceptron. Paul Werbos developed the backpropagation
algorithm for multilayer perceptrons in 1974, which led to a renaissance in neural
network research and connectionism in general in the middle 1980s. The Hopfield
net, a form of attractor network, was first described by John Hopfield in 1982.

Common network architectures which have been developed include the
feedforward neural network, the radial basis network, the Kohonen self-
organizing map and various recurrent neural networks. Neural networks are
applied to the problem of learning, using such techniques as Hebbian learning,
competitive learning and the relatively new architectures of Hierarchical
Temporal Memory and Deep Belief Networks.
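
A single perceptron, the early architecture mentioned above, can be trained with the
classic perceptron learning rule in a few lines; the task (logical AND) and the number
of epochs are chosen purely for illustration, and a perceptron can only learn linearly
separable functions.

    def train_perceptron(samples, epochs=10):
        """Perceptron learning rule with integer weights and a bias term."""
        w1 = w2 = bias = 0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
                error = target - output          # -1, 0 or +1
                w1 += error * x1                 # nudge weights toward the target
                w2 += error * x2
                bias += error
        return w1, w2, bias

    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, bias = train_perceptron(and_samples)
    for (x1, x2), _ in and_samples:
        print(x1, x2, "->", 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)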

Control theory

Control theory, the grandchild of cybernetics, has many important applications,
especially in robotics.

Specialized languages

AI researchers have developed several specialized languages for AI research:

• IPL was the first language developed for artificial intelligence. It includes
features intended to support programs that could perform general problem
solving, including lists, associations, schemas (frames), dynamic memory
allocation, data types, recursion, associative retrieval, functions as
arguments, generators (streams), and cooperative multitasking.
• Lisp is a practical mathematical notation for computer programs based on
lambda calculus. Linked lists are one of Lisp's major data
structures, and Lisp source code is itself made up of lists. As a result, Lisp
programs can manipulate source code as a data structure, giving rise to the
macro systems that allow programmers to create new syntax or even new
domain-specific programming languages embedded in Lisp. There are
many dialects of Lisp in use today.
• Prolog is a declarative language where programs are expressed in terms of
relations, and execution occurs by running queries over these relations.
Prolog is particularly useful for symbolic reasoning, database and
language parsing applications. Prolog is widely used in AI today.
• STRIPS is a language for expressing automated planning problem
instances. It expresses an initial state, the goal states, and a set of actions.
For each action preconditions (what must be established before the action
is performed) and postconditions (what is established after the action is
performed) are specified.
• Planner is a hybrid between procedural and logical languages. It gives a
procedural interpretation to logical sentences where implications are
interpreted with pattern-directed inference.

AI applications are also often written in standard languages like C++ and
languages designed for mathematics, such as MATLAB and Lush.

Evaluating artificial intelligence

How can one determine if an agent is intelligent? In 1950, Alan Turing proposed a
general procedure to test the intelligence of an agent now known as the Turing
test. This procedure allows almost all the major problems of artificial intelligence
to be tested. However, it is a very difficult challenge and at present all agents fail.

Artificial intelligence can also be evaluated on specific problems such as small
problems in chemistry, handwriting recognition and game-playing. Such tests
have been termed subject matter expert Turing tests. Smaller problems provide
more achievable goals and there are an ever-increasing number of positive results.

The broad classes of outcome for an AI test are:

• optimal: it is not possible to perform better
• strong super-human: performs better than all humans
• super-human: performs better than most humans
• sub-human: performs worse than most humans


For example, performance at checkers (draughts) is optimal, performance at chess
is super-human and nearing strong super-human, and performance at many
everyday tasks performed by humans is sub-human.


Applications of artificial intelligence

Artificial intelligence has successfully been used in a wide range of fields
including medical diagnosis, stock trading, robot control, law, scientific
discovery, video games and toys. Frequently, when a technique reaches
mainstream use it is no longer considered artificial intelligence, sometimes
described as the AI effect. It may also become integrated into artificial life.

Areas of application

• Simulated annealing
• Machine learning
• Artificial immune systems
• Expert systems
• Hybrid intelligent systems
• Hybrid logic
• Simulated reality
• Soft computing
• Bayesian networks
• Chaos theory
• Ant colony optimization
• Particle swarm optimisation
• Cognitive robotics
• Developmental robotics
• Evolutionary robotics
• Intelligent agents
• Knowledge-Based Engineering
• Type-2 fuzzy sets and systems

Remotely related topics:

• Bioinformatics and Bioengineering
• Computational finance and Computational economics
• Intelligent systems
• Emergence
• Data mining
• Concept mining

Software

• Computational Intelligence Library (CILib)
• OAT (Optimization Algorithm Toolkit): A set of standard computational
intelligence optimization algorithms and problems in Java.


Organizations

• IEEE Computational Intelligence Society
• The Computational Intelligence and Machine Learning Virtual
Community


Computer simulation

A computer simulation, a computer model or a computational model is a
computer program, or network of computers, that attempts to simulate an abstract
model of a particular system. Computer simulations have become a useful part of
mathematical modeling of many natural systems in physics (computational
physics), chemistry and biology, human systems in economics, psychology, and
social science and in the process of engineering new technology, to gain insight
into the operation of those systems, or to observe their behavior.

Computer simulations vary from computer programs that run a few minutes, to
network-based groups of computers running for hours, to ongoing simulations
that run for days. The scale of events being simulated by computer simulations
has far exceeded anything possible (or perhaps even imaginable) using the
traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-
battle simulation, of one force invading another, involved the modeling of 66,239
tanks, trucks and other vehicles on simulated terrain around Kuwait, using
multiple supercomputers in the DoD High Performance Computer Modernization
Program; a 1-billion-atom model of material deformation (2002); a 2.64-million-
atom model of the ribosome, the complex that makes protein in all organisms, in
2005; and the Blue Brain project at EPFL (Switzerland), begun in May 2005, to
create the first computer simulation of the entire human brain, right down to the
molecular level.

History

Computer simulation was developed hand-in-hand with the rapid growth of the
computer, following its first large-scale deployment during the Manhattan Project
in World War II to model the process of nuclear detonation. It was a simulation of
12 hard spheres using a Monte Carlo algorithm. Computer simulation is often
used as an adjunct to, or substitution for, modeling systems for which simple
closed form analytic solutions are not possible. There are many different types of
computer simulation; the common feature they all share is the attempt to generate
a sample of representative scenarios for a model in which a complete enumeration
of all possible states of the model would be prohibitive or impossible. Computer
models were initially used as a supplement for other arguments, but their use later
became rather widespread.

Data preparation

The data input/output for the simulation can be handled either through formatted
text files or via a pre- and post-processor.


Types

Computer models can be classified according to several independent pairs of
attributes, including:

• Stochastic or deterministic (and, as a special case of deterministic, chaotic)
• Steady-state or dynamic
• Continuous or discrete (and as an important special case of discrete,
discrete event or DE models)
• Local or distributed.

These attributes may be arbitrarily combined to form terminology that describes
simulation types, such as "continuous dynamic simulations" or "discrete dynamic
simulations."

For example:

• Steady-state models use equations defining the relationships between
elements of the modeled system and attempt to find a state in which the
system is in equilibrium. Such models are often used in simulating
physical systems, as a simpler modeling case before dynamic simulation is
attempted.
• Dynamic simulations model changes in a system in response to (usually
changing) input signals.
• Stochastic models use random number generators to model chance or
random events;
• A discrete event simulation (DES) manages events in time. Most
computer, logic-test and fault-tree simulations are of this type. In this type
of simulation, the simulator maintains a queue of events sorted by the
simulated time at which they should occur. The simulator reads the queue and
triggers new events as each event is processed. It is not important to
execute the simulation in real time; it is often more important to be able to
access the data produced by the simulation, to discover logic defects in the
design, or the sequence of events. A minimal sketch of this event-queue
idea is given after this list.
• A continuous dynamic simulation performs numerical solution of
differential-algebraic equations or differential equations (either partial or
ordinary). Periodically, the simulation program solves all the equations,
and uses the numbers to change the state and output of the simulation.
Applications include flight simulators, construction and management
simulation games, chemical process modeling, and simulations of
electrical circuits. Originally, these kinds of simulations were actually
implemented on analog computers, where the differential equations could
be represented directly by various electrical components such as op-amps.
By the late 1980s, however, most "analog" simulations were run on
conventional digital computers that emulate the behavior of an analog
computer.
• A special type of discrete simulation which does not rely on a model with
an underlying equation, but can nonetheless be represented formally, is
agent-based simulation. In agent-based simulation, the individual entities
(such as molecules, cells, trees or consumers) in the model are represented
directly (rather than by their density or concentration) and possess an
internal state and set of behaviors or rules which determine how the
agent's state is updated from one time-step to the next.
• Distributed models run on a network of interconnected computers,
possibly through the Internet. Simulations dispersed across multiple host
computers like this are often referred to as "distributed simulations". There
are several standards for distributed simulation, including Aggregate Level
Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the
High Level Architecture (simulation) (HLA) and the Test and Training
Enabling Architecture (TENA).
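
A minimal sketch of the event-queue idea behind discrete event simulation, as
referenced in the list above; the events, their times and the arrival pattern are
invented for illustration.

    import heapq

    events = []                                    # priority queue of (time, name)
    heapq.heappush(events, (0.0, "customer_arrives"))
    heapq.heappush(events, (4.0, "service_finishes"))

    while events:
        clock, name = heapq.heappop(events)        # jump straight to the next event
        print("t=%4.1f  %s" % (clock, name))
        if name == "customer_arrives" and clock < 10.0:
            # Processing an event may schedule further events (the next arrival).
            heapq.heappush(events, (clock + 3.0, "customer_arrives"))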

CGI computer simulation

Formerly, the output data from a computer simulation was sometimes presented in
a table, or a matrix, showing how data was affected by numerous changes in the
simulation parameters. The use of the matrix format was related to traditional use
of the matrix concept in mathematical models; however, psychologists and
others noted that humans could quickly perceive trends by looking at graphs or
even moving-images or motion-pictures generated from the data, as displayed by
computer-generated-imagery (CGI) animation. Although observers couldn't
necessarily read out numbers, or spout math formulas, from observing a moving
weather chart, they might be able to predict events (and "see that rain was headed
their way"), much faster than scanning tables of rain-cloud coordinates. Such
intense graphical displays, which transcended the world of numbers and formulae,
sometimes also led to output that lacked a coordinate grid or omitted timestamps,
as if straying too far from numeric data displays. Today, weather forecasting
models tend to balance the view of moving rain/snow clouds against a map that
uses numeric coordinates and numeric timestamps of events.

Similarly, CGI computer simulations of CAT scans can simulate how a tumor
might shrink or change, during an extended period of medical treatment,
presenting the passage of time as a spinning view of the visible human head, as
the tumor changes.

Other applications of CGI computer simulations are being developed to
graphically display large amounts of data, in motion, as changes occur during a
simulation run.


Computer simulation in science

Generic examples of types of computer simulations in science, which are derived
from an underlying mathematical description:

• a numerical simulation of differential equations which cannot be solved
analytically. Theories which involve continuous systems, such as
phenomena in physical cosmology, fluid dynamics (e.g. climate models,
roadway noise models, roadway air dispersion models), continuum
mechanics and chemical kinetics, fall into this category.
• a stochastic simulation, typically used for discrete systems where events
occur probabilistically, and which cannot be described directly with
differential equations (this is a discrete simulation in the above sense).
Phenomena in this category include genetic drift, biochemical or gene
regulatory networks with small numbers of molecules. (see also: Monte
Carlo method).

Specific examples of computer simulations follow:

• statistical simulations based upon an agglomeration of a large number of
input profiles, such as the forecasting of equilibrium temperature of
receiving waters, allowing the gamut of meteorological data to be input for
a specific locale. This technique was developed for thermal pollution
forecasting.
• agent based simulation has been used effectively in ecology, where it is
often called individual based modeling and has been used in situations for
which individual variability in the agents cannot be neglected, such as
population dynamics of salmon and trout (most purely mathematical
models assume all trout behave identically).
• time-stepped dynamic models. In hydrology there are several such
hydrology transport models, such as the SWMM and DSSAM models
developed by the U.S. Environmental Protection Agency for river water
quality forecasting.
• computer simulations have also been used to formally model theories of
human cognition and performance, e.g. ACT-R
• computer simulation using molecular modeling for drug discovery
• Computational fluid dynamics simulations are used to simulate the
behaviour of flowing air, water and other fluids. One-, two- and
three-dimensional models are used. A one-dimensional model might simulate
the effects of water hammer in a pipe. A two-dimensional model might be
used to simulate the drag forces on the cross-section of an aeroplane wing.
A three-dimensional simulation might estimate the heating and cooling
requirements of a large building.


• An understanding of statistical thermodynamic molecular theory is
fundamental to the appreciation of molecular solutions. Development of
the Potential Distribution Theorem (PDT) allows one to simplify this
complex subject to down-to-earth presentations of molecular theory.

Simulation versus modeling

Traditionally, forming large models of systems has been done via a mathematical
model, which attempts to find analytical solutions to problems and thereby enable
the prediction of the behavior of the system from a set of parameters and initial
conditions.

While computer simulations might use some algorithms from purely
mathematical models, computers can combine simulations with reality or actual
events, such as generating input responses, to simulate test subjects who are no
longer present. Whereas the missing test subjects are being modeled/simulated,
the system they use could be the actual equipment, revealing performance limits
or defects in long-term use by the simulated users.

Note that the term computer simulation is broader than computer modeling, which
implies that all aspects are being modeled in the computer representation.
However, computer simulation also includes generating inputs from simulated
users to run actual computer software or equipment, with only part of the system
being modeled: an example would be flight simulators which can run machines as
well as actual flight software.

Computer simulations are used in many fields, including science, technology,
entertainment, and business planning and scheduling.

Simulation environments for physics and engineering

Graphical environments to design simulations have been developed. Special care
was taken to handle events (situations in which the simulation equations are not
valid and have to be changed). The open project Open Source Physics was started
to develop reusable libraries for simulations in Java, together with Easy Java
Simulations, a complete graphical environment that generates code based on these
libraries.

Computer simulation in practical contexts

Computer simulations are used in a wide variety of practical contexts, such as:

• analysis of air pollutant dispersion using atmospheric dispersion modeling

• design of complex systems such as aircraft and also logistics systems.
• design of noise barriers to effect roadway noise mitigation
• flight simulators to train pilots
• weather forecasting
• forecasting of prices on financial markets (for example Adaptive Modeler)
• behavior of structures (such as buildings and industrial parts) under stress
and other conditions
• design of industrial processes, such as chemical processing plants
• Strategic Management and Organizational Studies
• Reservoir simulation for the petroleum engineering to model the
subsurface reservoir
• Process Engineering Simulation tools.
• Robot simulators for the design of robots and robot control algorithms
• Traffic engineering to plan or redesign parts of the street network from
single junctions over cities to a national highway network, see for example
VISSIM.
• modeling car crashes to test safety mechanisms in new vehicle models

The reliability of, and the trust people put in, computer simulations depend on the
validity of the simulation model; therefore verification and validation are of
crucial importance in the development of computer simulations. Another
important aspect of computer simulations is reproducibility of the results,
meaning that a simulation model should not provide a different answer for each
execution. Although this might seem obvious, it is a special point of attention in
stochastic simulations, where the "random" numbers should actually be
pseudo-random numbers drawn from a seeded generator. An exception to
reproducibility are human-in-the-loop simulations such as flight simulations and
computer games. Here a human is part of the simulation and thus influences the
outcome in a way that is hard if not impossible to reproduce exactly.
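
A short Python sketch of this point: seeding the pseudo-random number generator
makes a stochastic simulation return exactly the same trajectory on every run (the
random-walk model itself is invented for illustration).

    import random

    def run_simulation(seed, steps=5):
        rng = random.Random(seed)            # independent, explicitly seeded generator
        value = 0.0
        for _ in range(steps):
            value += rng.gauss(0.0, 1.0)     # one random shock per step
        return value

    print(run_simulation(seed=42) == run_simulation(seed=42))   # True: reproducible
    print(run_simulation(seed=42) == run_simulation(seed=43))   # (almost surely) False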

Vehicle manufacturers make use of computer simulation to test safety features in
new designs. By building a copy of the car in a physics simulation environment,
they can save the hundreds of thousands of dollars that would otherwise be
required to build a unique prototype and test it. Engineers can step through the
simulation milliseconds at a time to determine the exact stresses being put upon
each section of the prototype.

Computer graphics can be used to display the results of a computer simulation.
Animations can be used to experience a simulation in real time, e.g. in training
simulations. In some cases animations may also be useful in faster than real-time
or even slower than real-time modes. For example, faster than real-time
animations can be useful in visualizing the buildup of queues in the simulation of
humans evacuating a building. Furthermore, simulation results are often
aggregated into static images using various ways of scientific visualization.


In debugging, simulating a program execution under test (rather than executing
natively) can detect far more errors than the hardware itself can detect and, at the
same time, log useful debugging information such as instruction trace, memory
alterations and instruction counts. This technique can also detect buffer overflow
and similar "hard to detect" errors as well as produce performance information
and tuning data.

Pitfalls

Although sometimes ignored in computer simulations, it is very important to
perform sensitivity analysis to ensure that the accuracy of the results is properly
understood. For example, the probabilistic risk analysis of factors determining the
success of an oilfield exploration program involves combining samples from a
variety of statistical distributions using the Monte Carlo method. If, for instance,
one of the key parameters (e.g. the net ratio of oil-bearing strata) is known to only
one significant figure, then the result of the simulation might not be more precise
than one significant figure, although it might (misleadingly) be presented as
having four significant figures.
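
A minimal sketch of this kind of Monte Carlo combination of uncertain inputs; the
distributions and figures are invented, and the point is that the result should not be
quoted to more significant figures than the least precise input supports.

    import random

    random.seed(1)                                  # fixed seed for reproducibility
    samples = []
    for _ in range(100000):
        gross_volume = random.uniform(90.0, 110.0)  # e.g. million barrels in place
        net_ratio = random.uniform(0.25, 0.35)      # known only to one significant figure
        samples.append(gross_volume * net_ratio)

    mean = sum(samples) / len(samples)
    print("mean recoverable volume ~ %.0f million barrels" % mean)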

Areas of application

• ACT-R
• Articulatory synthesis
• Artificial life
• CAVE
• Computer-aided design
• Computer simulation and organizational studies
• Dry Lab
• Earth Simulator
• Emulator
• Experiment in silico
• Global climate model
• Ice sheet model
• List of computer simulation software
• Mathematical model
• MapleSim
• Molecular dynamics
• SimApp
• Simcyp Simulator
• Simulated reality
• Social simulation
• Solver (computer science)
• Web based simulation

Organizations


• EUROSIM - Federation of European Simulation Societies
• Institute for Simulation and Training, University of Central Florida
• Simulation Interoperability Standards Organization
• The Society for Modeling and Simulation International (Formerly the
Society of Computer Simulation)
• United States Defense Modeling and Simulation Office
• The System Dynamics Society
• The Computational Modelling Group at Cambridge University's
Department of Chemical Engineering
• Liophant Simulation
• United Simulation Team - Genoa University
• High Performance Systems Group at the University of Warwick, UK

Education

• Simulation-An Enabling Technology in Software Engineering
• Sabanci University School of Languages Podcasts: Computer Simulation
by Prof. David M. Goldsman
• IMTEK Mathematica Supplement (IMS) (some Mathematica-specific
tutorials here)
• The Creative Learning Exchange
• McLeod Institute of Simulation Science

Examples

• WARPP Distributed/Parallel System Simulation Toolkit written by the
High Performance Systems Group at the University of Warwick
• A portfolio of free public simulations from the University of Florida
• Integrated Land Use, Transportation, Environment, (ILUTE) Modeling
System
• Nanorobotics Simulation - Computational Nanomechatronics Lab. at
Center for Automation in Nanobiotech (CAN)
• Online traffic simulation
• Adaptive Modeler - simulation models for price forecasting of financial
markets
• Shakemovie Caltech's Online Seismic Event Simulation
• DIG - Demographics, Investment and Company Growth Simulation
• Global Politics Simulation
• Industrial & Educational Examples of Modelling & Simulation
• Generalized online simulation utility
• Catchment Modelling Toolkit
• Cellular Automata for Simulation in Games


Financial Risk

Risk Management is the identification, assessment, and prioritization of risks,
followed by coordinated and economical application of resources to minimize,
monitor, and control the probability and/or impact of unfortunate events. Risks
can come from uncertainty in financial markets, project failures, legal liabilities,
credit risk, accidents, natural causes and disasters, as well as deliberate attacks
from an adversary. Several risk management standards have been developed,
including those of the Project Management Institute, the National Institute of
Standards and Technology, actuarial societies, and ISO. Methods, definitions and
goals vary widely according to whether the risk management method is in the
context of project management, security, engineering, industrial processes,
financial portfolios, actuarial assessments, or public health and safety.

For the most part, these methodologies consist of the following elements,
performed, more or less, in the following order.

1. identify, characterize, and assess threats
2. assess the vulnerability of critical assets to specific threats
3. determine the risk (i.e. the expected consequences of specific types of
attacks on specific assets)
4. identify ways to reduce those risks
5. prioritize risk reduction measures based on a strategy

The strategies to manage risk include transferring the risk to another party,
avoiding the risk, reducing the negative effect of the risk, and accepting some or
all of the consequences of a particular risk.

Introduction

This section provides an introduction to the principles of risk management. The
vocabulary of risk management is defined in ISO Guide 73, "Risk management.
Vocabulary".

In ideal risk management, a prioritization process is followed whereby the risks
with the greatest loss and the greatest probability of occurring are handled first,
and risks with lower probability of occurrence and lower loss are handled in
descending order. In practice the process can be very difficult, and balancing
between risks with a high probability of occurrence but lower loss versus a risk
with high loss but lower probability of occurrence can often be mishandled.
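
In its simplest form, this prioritization amounts to ranking risks by expected loss
(probability times impact), as in the Python sketch below; the register entries and
the numbers are invented.

    risks = [
        {"name": "server outage",   "probability": 0.30, "impact": 50000},
        {"name": "data breach",     "probability": 0.05, "impact": 400000},
        {"name": "key staff leave", "probability": 0.20, "impact": 80000},
    ]

    # Handle the largest expected losses first (descending order).
    for risk in sorted(risks, key=lambda r: r["probability"] * r["impact"],
                       reverse=True):
        print("%-16s expected loss: %8.0f" % (risk["name"],
                                              risk["probability"] * risk["impact"]))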

Intangible risk management identifies a new type of risk that has a 100%
probability of occurring but is ignored by the organization due to a lack of
identification ability. For example, when deficient knowledge is applied to a
situation, a knowledge risk materialises. Relationship risk appears when
ineffective collaboration occurs. Process-engagement risk may be an issue when
ineffective operational procedures are applied. These risks directly reduce the
productivity of knowledge workers, decrease cost effectiveness, profitability,
service, quality, reputation, brand value, and earnings quality. Intangible risk
management allows risk management to create immediate value from the
identification and reduction of risks that reduce productivity.

Risk management also faces difficulties allocating resources. This is the idea of
opportunity cost. Resources spent on risk management could have been spent on
more profitable activities. Again, ideal risk management minimizes spending
while maximizing the reduction of the negative effects of risks.

Principles of risk management

The International Organization for Standardization identifies the following
principles of risk management:

• Risk management should create value.
• Risk management should be an integral part of organizational processes.
• Risk management should be part of decision making.
• Risk management should explicitly address uncertainty.
• Risk management should be systematic and structured.
• Risk management should be based on the best available information.
• Risk management should be tailored.
• Risk management should take into account human factors.
• Risk management should be transparent and inclusive.
• Risk management should be dynamic, iterative and responsive to change.
• Risk management should be capable of continual improvement and
enhancement.

Process

According to the standard ISO/DIS 31000 "Risk management -- Principles and
guidelines on implementation", the process of risk management consists of
several steps as follows:


Establishing the context

Establishing the context involves

1. Identification of risk in a selected domain of interest
2. Planning the remainder of the process.
3. Mapping out the following:
o the social scope of risk management
o the identity and objectives of stakeholders
o the basis upon which risks will be evaluated, constraints.
4. Defining a framework for the activity and an agenda for
identification.
5. Developing an analysis of risks involved in the process.
6. Mitigation of risks using available technological, human and
organizational resources.

Identification

After establishing the context, the next step in the process of managing risk is to
identify potential risks. Risks are about events that, when triggered, cause
problems. Hence, risk identification can start with the source of problems, or with
the problem itself.

• Source analysis Risk sources may be internal or external to the system
that is the target of risk management.

Examples of risk sources are: stakeholders of a project, employees of a company
or the weather over an airport.

• Problem analysis Risks are related to identified threats. For example: the
threat of losing money, the threat of abuse of privacy information or the
threat of accidents and casualties. The threats may exist with various
entities, most important with shareholders, customers and legislative
bodies such as the government.

When either source or problem is known, the events that a source may trigger or
the events that can lead to a problem can be investigated. For example:
stakeholders withdrawing during a project may endanger funding of the project;
privacy information may be stolen by employees even within a closed network;
lightning striking a Boeing 747 during takeoff may make all people onboard
immediate casualties.


The chosen method of identifying risks may depend on culture, industry practice
and compliance. The identification methods are formed by templates or the
development of templates for identifying source, problem or event. Common risk
identification methods are:

• Objectives-based risk identification Organizations and project teams
have objectives. Any event that may endanger achieving an objective
partly or completely is identified as risk.
• Scenario-based risk identification In scenario analysis different
scenarios are created. The scenarios may be the alternative ways to
achieve an objective, or an analysis of the interaction of forces in, for
example, a market or battle. Any event that triggers an undesired scenario
alternative is identified as risk - see Futures Studies for methodology used
by Futurists.
• Taxonomy-based risk identification The taxonomy in taxonomy-based
risk identification is a breakdown of possible risk sources. Based on the
taxonomy and knowledge of best practices, a questionnaire is compiled.
The answers to the questions reveal risks. Taxonomy-based risk
identification in software industry can be found in CMU/SEI-93-TR-6.
• Common-risk checking In several industries lists with known risks are
available. Each risk in the list can be checked for application to a
particular situation. An example of known risks in the software industry is
the Common Vulnerability and Exposures list found at
http://cve.mitre.org.
• Risk charting (risk mapping) This method combines the above
approaches by listing Resources at risk, Threats to those resources,
Modifying Factors which may increase or decrease the risk, and
Consequences it is wished to avoid. Creating a matrix under these
headings enables a variety of approaches. One can begin with resources
and consider the threats they are exposed to and the consequences of each.
Alternatively one can start with the threats and examine which resources
they would affect, or one can begin with the consequences and determine
which combination of threats and resources would be involved to bring
them about.
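As an illustration of the risk-charting approach, the following is a minimal
Python sketch of such a matrix; the resources, threats, modifying factors and
consequences are hypothetical examples, not drawn from any standard.

# Minimal sketch of a risk-charting matrix: each row links a resource at risk to a
# threat, the modifying factors, and the consequence to be avoided.
# All entries are hypothetical illustrations.
risk_chart = [
    {"resource": "customer database", "threat": "unauthorised access",
     "modifying_factors": ["weak passwords", "network segmentation"],
     "consequence": "loss of confidential data"},
    {"resource": "trading platform", "threat": "hardware failure",
     "modifying_factors": ["redundant servers"],
     "consequence": "missed trades and lost revenue"},
]

def threats_to(resource):
    """Start from a resource and list the threats it is exposed to."""
    return [row["threat"] for row in risk_chart if row["resource"] == resource]

def causes_of(consequence):
    """Start from a consequence and find the resource/threat combinations behind it."""
    return [(row["resource"], row["threat"])
            for row in risk_chart if row["consequence"] == consequence]

print(threats_to("customer database"))              # ['unauthorised access']
print(causes_of("missed trades and lost revenue"))  # [('trading platform', 'hardware failure')]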

Assessment

Once risks have been identified, they must then be assessed as to their potential
severity of loss and to the probability of occurrence. These quantities can be either
simple to measure, in the case of the value of a lost building, or impossible to
know for sure in the case of the probability of an unlikely event occurring.
Therefore, in the assessment process it is critical to make the best educated
guesses possible in order to properly prioritize the implementation of the risk
management plan.


The fundamental difficulty in risk assessment is determining the rate of
occurrence, since statistical information is not available on all kinds of past
incidents. Furthermore, evaluating the severity of the consequences (impact) is
often quite difficult for immaterial assets. Asset valuation is another question that
needs to be addressed. Thus, best educated opinions and available statistics are the
primary sources of information. Nevertheless, risk assessment should give the
organization's management information from which the primary risks are easy to
understand and from which risk management decisions can be prioritized.
Thus, there have been several theories and attempts to quantify risks. Numerous
different risk formulae exist, but perhaps the most widely accepted formula for
risk quantification is:

risk = rate (probability) of occurrence × impact of the event

Later research has shown that the financial benefits of risk management depend
less on the formula used than on how frequently and how well risk assessment is
performed.
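As a rough illustration of the formula above, the Python sketch below scores a
few hypothetical risks and ranks them by expected loss; all probabilities and
impact figures are invented for the example.

# Minimal sketch: risk = rate (probability) of occurrence x impact of the event.
# Probabilities are annual likelihoods, impacts are loss estimates in currency units;
# every figure here is hypothetical.
risks = {
    "server outage":       {"probability": 0.30, "impact": 50_000},
    "data breach":         {"probability": 0.05, "impact": 400_000},
    "key staff departure": {"probability": 0.20, "impact": 80_000},
}

for item in risks.values():
    item["risk"] = item["probability"] * item["impact"]   # expected annual loss

# Rank the risks so the largest expected losses are treated first.
for name, item in sorted(risks.items(), key=lambda kv: kv[1]["risk"], reverse=True):
    print(f"{name:20s} expected annual loss = {item['risk']:>9,.0f}")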

In business it is imperative to be able to present the findings of risk assessments
in financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for
presenting risks in financial terms. The Courtney formula was accepted as the
official risk analysis method for US government agencies. The formula
proposes calculation of ALE (annualised loss expectancy) and compares the
expected loss value to the security control implementation costs (cost-benefit
analysis).
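Under the common reading of this approach, in which ALE equals the annual rate
of occurrence (ARO) multiplied by the single loss expectancy (SLE), the
cost-benefit check can be sketched in Python as below; the function names and
all figures are hypothetical.

# Minimal sketch of an ALE-based cost-benefit check, assuming the common reading
# ALE = ARO (annual rate of occurrence) x SLE (single loss expectancy).
# All names and figures are hypothetical.

def annualised_loss_expectancy(aro, sle):
    """Expected loss per year from a single risk."""
    return aro * sle

def control_is_worthwhile(ale_before, ale_after, annual_control_cost):
    """A control pays off if the reduction in expected loss exceeds its yearly cost."""
    return (ale_before - ale_after) > annual_control_cost

ale_before = annualised_loss_expectancy(aro=0.5, sle=200_000)  # no control in place
ale_after  = annualised_loss_expectancy(aro=0.5, sle=40_000)   # control limits each loss

print(ale_before, ale_after)                                   # 100000.0 20000.0
print(control_is_worthwhile(ale_before, ale_after, 30_000))    # True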

Potential risk treatments

Once risks have been identified and assessed, all techniques to manage the risk
fall into one or more of these four major categories:

• Avoidance (eliminate)
• Reduction (mitigate)
• Transfer (outsource or insure)
• Retention (accept and budget)

Ideal use of these strategies may not be possible. Some of them may involve
trade-offs that are not acceptable to the organization or person making the risk
management decisions. Another source, from the US Department of Defense,
Defense Acquisition University, calls these categories ACAT, for Avoid, Control,
Accept, or Transfer. This use of the ACAT acronym is reminiscent of another
ACAT (for Acquisition Category) used in US Defense industry procurements, in
which Risk Management figures prominently in decision making and planning.


Risk avoidance

Includes not performing an activity that could carry risk. An example would be
not buying a property or business so as not to take on the liability that comes
with it. Another would be not flying so as not to take the risk that the airplane
might be hijacked. Avoidance may seem the answer to all risks, but avoiding
risks also means losing out on the potential gain that accepting (retaining) the risk
may have allowed. Not entering a business to avoid the risk of loss also avoids the
possibility of earning profits.

Risk reduction

Involves methods that reduce the severity of the loss or the likelihood of the loss
from occurring. For example, sprinklers are designed to put out a fire to reduce
the risk of loss by fire. This method may cause a greater loss by water damage
and therefore may not be suitable. Halon fire suppression systems may mitigate
that risk, but the cost may be prohibitive as a strategy. Risk management may also
take the form of a set policy, such as only allowing the use of secured IM
platforms (like Brosix) and not allowing personal IM platforms (like AIM) to be
used, in order to reduce the risk of data leaks.

Modern software development methodologies reduce risk by developing and
delivering software incrementally. Early methodologies suffered from the fact that
they only delivered software in the final phase of development; any problems
encountered in earlier phases meant costly rework and often jeopardized the
whole project. By developing in iterations, software projects can limit effort
wasted to a single iteration.

Outsourcing could be an example of risk reduction if the outsourcer can
demonstrate higher capability at managing or reducing risks. In this case
companies outsource only some of their departmental needs. For example, a
company may outsource only its software development, the manufacturing of
hard goods, or customer support needs to another company, while handling the
business management itself. This way, the company can concentrate more on
business development without having to worry as much about the manufacturing
process, managing the development team, or finding a physical location for a call
center.

Risk retention

Involves accepting the loss when it occurs. True self insurance falls in this
category. Risk retention is a viable strategy for small risks where the cost of
insuring against the risk would be greater over time than the total losses sustained.
All risks that are not avoided or transferred are retained by default. This includes
risks that are so large or catastrophic that they either cannot be insured against or
the premiums would be infeasible. War is an example, since most property and
risks are not insured against war, so the loss attributable to war is retained by the
insured. Also, any amount of potential loss (risk) over the amount insured is
retained risk. This may also be acceptable if the chance of a very large loss is
small or if the cost to insure for greater coverage amounts is so great it would
hinder the goals of the organization too much.

Risk transfer

In the terminology of practitioners and scholars alike, the purchase of an
insurance contract is often described as a "transfer of risk." However, technically
speaking, the buyer of the contract generally retains legal responsibility for the
losses "transferred", meaning that insurance may be described more accurately as
a post-event compensatory mechanism. For example, a personal injuries insurance
policy does not transfer the risk of a car accident to the insurance company. The
risk still lies with the policy holder, namely the person who has been in the
accident. The insurance policy simply provides that if an accident (the event)
occurs involving the policy holder then some compensation may be payable to the
policy holder that is commensurate to the suffering/damage.

Some ways of managing risk fall into multiple categories. Risk retention pools are
technically retaining the risk for the group, but spreading it over the whole group
involves transfer among individual members of the group. This is different from
traditional insurance, in that no premium is exchanged between members of the
group up front, but instead losses are assessed to all members of the group.

Create a risk-management plan

Select appropriate controls or countermeasures to mitigate each risk. Risk
mitigation needs to be approved by the appropriate level of management. For
example, a risk concerning the image of the organization should have a top
management decision behind it, whereas IT management would have the authority
to decide on computer virus risks.

The risk management plan should propose applicable and effective security
controls for managing the risks. For example, an observed high risk of computer
viruses could be mitigated by acquiring and implementing antivirus software. A
good risk management plan should contain a schedule for control implementation
and responsible persons for those actions.

According to ISO/IEC 27001, the stage immediately after completion of the Risk
Assessment phase consists of preparing a Risk Treatment Plan, which should
document the decisions about how each of the identified risks should be handled.
Mitigation of risks often means selection of security controls, which should be
documented in a Statement of Applicability, which identifies which particular
control objectives and controls from the standard have been selected, and why.


Implementation

Follow all of the planned methods for mitigating the effect of the risks. Purchase
insurance policies for the risks that have been decided to be transferred to an
insurer, avoid all risks that can be avoided without sacrificing the entity's goals,
reduce others, and retain the rest.

Review and evaluation of the plan

Initial risk management plans will never be perfect. Practice, experience, and
actual loss results will necessitate changes in the plan and contribute information
to allow possible different decisions to be made in dealing with the risks being
faced.

Risk analysis results and management plans should be updated periodically.
There are two primary reasons for this:

1. to evaluate whether the previously selected security controls are still
applicable and effective, and
2. to evaluate possible changes in risk levels in the business environment.
Information risks, for example, change particularly rapidly with the
business environment.

Limitations

If risks are improperly assessed and prioritized, time can be wasted in dealing
with risk of losses that are not likely to occur. Spending too much time assessing
and managing unlikely risks can divert resources that could be used more
profitably. Unlikely events do occur but if the risk is unlikely enough to occur it
may be better to simply retain the risk and deal with the result if the loss does in
fact occur. Qualitative risk assessment is subjective and lacks consistency. The
primary justification for a formal risk assessment process is legal and
bureaucratic.

Prioritizing too highly the risk management processes could keep an organization
from ever completing a project or even getting started. This is especially true if
other work is suspended until the risk management process is considered
complete.

It is also important to keep in mind the distinction between risk and uncertainty:
risk can be measured as impact × probability, whereas uncertainty cannot be
quantified in this way.

Areas of risk management


As applied to corporate finance, risk management is the technique for measuring,
monitoring and controlling the financial or operational risk on a firm's balance
sheet. See value at risk.

The Basel II framework breaks risks into market risk (price risk), credit risk and
operational risk and also specifies methods for calculating capital requirements
for each of these components.

Enterprise risk management

In enterprise risk management, a risk is defined as a possible event or
circumstance that can have negative influences on the enterprise in question. Its
impact can be on the very existence, the resources (human and capital), the
products and services, or the customers of the enterprise, as well as external
impacts on society, markets, or the environment. In a financial institution,
enterprise risk management is normally thought of as the combination of credit
risk, interest rate risk or asset liability management, market risk, and operational
risk.

In the more general case, every probable risk can have a pre-formulated plan to
deal with its possible consequences (to ensure contingency if the risk becomes a
liability).

From the information above and the average cost per employee over time, or cost
accrual ratio (CAR), a project manager can estimate the quantities below (a small
worked sketch in code follows this list):

• the cost associated with the risk if it arises, estimated by multiplying
employee costs per unit time by the estimated time lost (cost impact, C,
where C = CAR * S).
• the probable increase in time associated with a risk (schedule variance due
to risk, Rs, where Rs = P * S, P being the probability that the risk arises
and S the estimated time lost):
o Sorting on this value puts the highest risks to the schedule first.
This is intended to cause the greatest risks to the project to be
attempted first so that risk is minimized as quickly as possible.
o This is slightly misleading as schedule variances with a large P and
small S and vice versa are not equivalent. (The risk of the RMS
Titanic sinking vs. the passengers' meals being served at slightly
the wrong time).
• the probable increase in cost associated with a risk (cost variance due to
risk, Rc, where Rc = P * C = P * CAR * S)
o sorting on this value puts the highest risks to the budget first.
o see concerns about schedule variance as this is a function of it, as
illustrated in the equation above.
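The following is a small worked sketch of these quantities in Python, assuming P
is the probability the risk arises, S the estimated time lost (in weeks) and CAR
the cost accrual ratio; every figure is hypothetical.

# Minimal sketch of the project-risk metrics described above, assuming
# P = probability the risk arises, S = estimated time lost (weeks),
# CAR = cost accrual ratio (cost per week of lost time). All figures hypothetical.
CAR = 5_000  # currency units per week

project_risks = [
    {"name": "key supplier delay",  "P": 0.4, "S": 6},
    {"name": "requirements change", "P": 0.7, "S": 2},
    {"name": "integration failure", "P": 0.2, "S": 10},
]

for r in project_risks:
    r["C"]  = CAR * r["S"]       # cost impact if the risk arises
    r["Rs"] = r["P"] * r["S"]    # schedule variance due to risk
    r["Rc"] = r["P"] * r["C"]    # cost variance due to risk (= P * CAR * S)

# Sorting on Rs puts the biggest threats to the schedule first;
# sorting on Rc instead would rank threats to the budget.
for r in sorted(project_risks, key=lambda r: r["Rs"], reverse=True):
    print(f"{r['name']:22s} Rs = {r['Rs']:4.1f} weeks   Rc = {r['Rc']:8,.0f}")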


Risk in a project or process can be due either to Special Cause Variation or
Common Cause Variation and requires appropriate treatment. This reiterates the
concern, noted in the list above, that extreme cases are not equivalent.


Risk-management activities as applied to project management

• Planning how risk will be managed in the particular project. The plan should
include risk management tasks, responsibilities, activities and budget.
• Assigning a risk officer - a team member other than the project manager
who is responsible for foreseeing potential project problems. A typical
characteristic of a risk officer is healthy skepticism.
• Maintaining a live project risk database (a minimal schema sketch follows
this list). Each risk should have the following attributes: opening date,
title, short description, probability and importance. Optionally, a risk may
have an assigned person responsible for its resolution and a date by which
the risk must be resolved.
• Creating an anonymous risk-reporting channel. Each team member should
have the possibility to report risks that they foresee in the project.
• Preparing mitigation plans for risks that are chosen to be mitigated. The
purpose of the mitigation plan is to describe how this particular risk will
be handled – what, when, by whom, and how it will be done to avoid the
risk or minimize its consequences if it becomes a liability.
• Summarizing planned and faced risks, the effectiveness of mitigation
activities, and the effort spent on risk management.
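The risk database mentioned above (often called a risk register) can be sketched
as a simple record type whose fields mirror the attributes just listed; the field
names and the sample entry are hypothetical.

# Minimal sketch of a project risk-register entry with the attributes listed above.
# Field names and the example entry are illustrative, not taken from any particular tool.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RiskRecord:
    opening_date: date
    title: str
    short_description: str
    probability: float                 # e.g. 0.0 to 1.0
    importance: int                    # e.g. 1 (low) to 5 (critical)
    owner: Optional[str] = None        # person responsible for resolution (optional)
    resolve_by: Optional[date] = None  # date by which the risk must be resolved (optional)

risk_database = [
    RiskRecord(opening_date=date(2010, 3, 1),
               title="Vendor delay",
               short_description="Key supplier may miss the delivery milestone.",
               probability=0.3,
               importance=4,
               owner="Deputy project manager"),
]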

Risk management and business continuity

Risk management is simply a practice of systematically selecting cost-effective
approaches for minimising the effect of threat realization on the organization. All
risks can never be fully avoided or mitigated simply because of financial and
practical limitations. Therefore all organizations have to accept some level of
residual risks.

Whereas risk management tends to be preemptive, business continuity planning
(BCP) was invented to deal with the consequences of realised residual risks. The
necessity to have BCP in place arises because even very unlikely events will
occur if given enough time. Risk management and BCP are often mistakenly seen
as rivals or overlapping practices. In fact these processes are so tightly tied
together that such separation seems artificial. For example, the risk management
process creates important inputs for the BCP (assets, impact assessments, cost
estimates etc). Risk management also proposes applicable controls for the
observed risks. Therefore, risk management covers several areas that are vital for
the BCP process. However, the BCP process goes beyond risk management's
preemptive approach and starts from the assumption that the disaster will
occur at some point.


Risk Communication

Risk communication refers to the idea that people are uncomfortable talking about
risk. People tend to put off admitting that risk is involved, as well as
communicating about risks and crises. Risk Communication can also be linked to
Crisis communication.

Benefits and Barriers of Risk Communication

"Some of the Benefits of risk communication include, improved collective and


individual decision making. Both the purpose of the exchange, and the nature of
the information have an impact on the benefits. Depending on the situation,
personal and community anxieties about environmental health risks can be
reduced or increased. For example, a goal might be raising concern about radon
and prompting action."


Areas of application

• Active Agenda
• Benefit risk
• Business continuity planning
• Chief Risk Officer
• Corporate governance
• Cost overrun
• Cost risk
• Critical chain
• Earned value management
• Enterprise Risk Management
• Environmental Risk Management Authority
• Event chain methodology
• Financial risk management
• Futures Studies
• Hazard prevention
• Hazop
• Insurance
• International Risk Governance Council
• ISDA
• Legal Risk
• List of finance topics
• List of project management topics
• Managerial risk accounting
• Megaprojects
• Megaprojects and risk
• Occupational Health and Safety
• Operational risk management
• Optimism bias
• Outsourcing
• Precautionary principle
• Process Safety Management
• Project management
• Public Entity Risk Institute
• Reference class forecasting
• Risk
• Risk analysis (engineering)
• Risk homeostasis
• Risk Management Agency
• Risk Management Authority
• Risk Management Information Systems
• Risk Management Research Programme
• Risk register
• Roy's safety-first criterion
• Society for Risk Analysis
• Timeboxing
• Social Risk Management
• Substantial equivalence
• Uncertainty
• Value at risk
• Viable System Model
• Vulnerability assessment
