Computational finance
1) Mathematical finance
2) Numerical methods
3) Computer simulations
4) Computational intelligence
5) Financial risk
History
Today, all full-service institutional finance firms employ computational finance
professionals in their banking and finance operations (as opposed to being
ancillary information technology specialists), while there are many other boutique
firms ranging from 20 or fewer employees to several thousand that specialize in
quantitative trading alone. JPMorgan Chase & Co. was one of the first firms to
create a large derivatives business and employ computational finance (including
through the formation of RiskMetrics), while D. E. Shaw & Co. is probably the
oldest and largest quant fund (Citadel Investment Group is a major rival).
Introduction
Another area where computational finance comes into play is the world of
financial risk management. Stockbrokers, stockholders, and anyone who chooses
to invest in any type of investment can benefit from using the basic principles of
computational finance as a way of managing an individual portfolio. Running the
numbers for individual investors, just as for larger concerns, can often make it
clear what risks are associated with any given investment opportunity. The result
can often be an individual who is able to sidestep a bad opportunity, and live to
invest another day in something that will be worthwhile in the long run.
In the business world, the use of computational finance can often come into play
when the time to engage in some form of corporate strategic planning arrives. For
instance, reorganizing the operating structure of a company in order to maximize
profits may look very good at first glance, but running the data through a process
of computational finance may in fact uncover some drawbacks to the current plan
that were not readily visible before.
Once the complete and true expenses associated with the restructuring are taken
into account, it may prove more costly than anticipated, and in the long run not
as productive as was originally hoped. Computational finance can help get past the
hype and provide some realistic views of what could happen, before any corporate
strategy is implemented.
Quantitative analysis
Although the original quants were concerned with risk management and
derivatives pricing, the meaning of the term has expanded over time to include
those individuals involved in almost any application of mathematics in finance.
An example is statistical arbitrage.
History
Quantitative finance started in the U.S. in the 1930s as some astute investors
began using mathematical formulae to price stocks and bonds.
Harry Markowitz's 1952 Ph.D. thesis "Portfolio Selection" was one of the first
papers to formally adapt mathematical concepts to finance. Markowitz formalized
a notion of mean return and covariances for common stocks which allowed him to
quantify the concept of "diversification" in a market. He showed how to compute
the mean return and variance for a given portfolio and argued that investors
should hold only those portfolios whose variance is minimal among all portfolios
with a given mean return. Although the language of finance now involves Itō
calculus, minimization of risk in a quantifiable manner underlies much of the
modern theory.
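As a minimal illustration of these two quantities, the following Python sketch
computes the mean return and variance of a portfolio from assumed, entirely
hypothetical mean returns and covariances for three stocks:

    import numpy as np

    # Hypothetical annual mean returns and covariance matrix for three stocks.
    mu = np.array([0.08, 0.12, 0.10])
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.06]])
    w = np.array([0.5, 0.3, 0.2])   # portfolio weights, summing to 1

    portfolio_mean = w @ mu         # E[R_p] = w' mu
    portfolio_var = w @ cov @ w     # Var[R_p] = w' C w
    print(portfolio_mean, portfolio_var ** 0.5)

Minimizing portfolio_var over the weights w, subject to a fixed portfolio_mean,
is exactly the optimization problem Markowitz posed.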
In 1969 Robert Merton introduced stochastic calculus into the study of finance.
Merton was motivated by the desire to understand how prices are set in financial
markets, which is the classical economics question of "equilibrium," and in later
papers he used the machinery of stochastic calculus to begin investigation of this
issue.
At the same time as Merton's work and with Merton's assistance, Fischer Black
and Myron Scholes were developing their option pricing formula, work for which
Scholes and Merton were awarded the 1997 Nobel Prize in Economics (Black had
died in 1995). It provided a solution for a practical
problem, that of finding a fair price for a European call option, i.e., the right to
buy one share of a given stock at a specified price and time. Such options are
frequently purchased by investors as a risk-hedging device. In 1981, Harrison and
Pliska used the general theory of continuous-time stochastic processes to put the
Black-Scholes option pricing formula on a solid theoretical basis, and as a result,
showed how to price numerous other "derivative" securities.
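The Black-Scholes formula itself is short enough to state in code. The following
Python sketch prices a European call under the formula's standard assumptions
(constant rate and volatility, no dividends); the input figures are arbitrary
examples:

    from math import log, sqrt, exp
    from statistics import NormalDist

    def bs_call(S, K, T, r, sigma):
        # Black-Scholes price of a European call: spot S, strike K,
        # time to expiry T in years, risk-free rate r, volatility sigma.
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        N = NormalDist().cdf
        return S * N(d1) - K * exp(-r * T) * N(d2)

    print(bs_call(S=100, K=105, T=1.0, r=0.03, sigma=0.2))  # fair call price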
Financial Products and Markets: an introduction to the financial markets and the
products traded in them (equities, indices, foreign exchange, the fixed-income
world and commodities), together with options contracts and strategies for
speculation and hedging.
Fixed Income
Fixed-income research groups use the thousands of prewritten math and graphics
functions in MathWorks products to access bond data, perform statistical analysis,
calculate spreads, determine bond and derivative pricing, perform sensitivity
analyses, and run Monte Carlo simulations.
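The paragraph above names MathWorks tools, but the core bond-pricing calculation
is tool-independent: discounting a fixed stream of coupons. A minimal sketch in
Python, with invented face value, coupon, yield and maturity:

    def bond_price(face, coupon_rate, ytm, years, freq=2):
        # Present value of a fixed-coupon bond: discounted coupons
        # plus discounted face value (coupon_rate and ytm are annual).
        c = face * coupon_rate / freq
        y = ytm / freq
        n = years * freq
        pv_coupons = sum(c / (1 + y) ** t for t in range(1, n + 1))
        return pv_coupons + face / (1 + y) ** n

    print(bond_price(face=1000, coupon_rate=0.05, ytm=0.04, years=10))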
Equity
Smart security investing requires in-depth research and analysis. Measuring all
the influencing factors is an essential part of risk management. As a result,
research groups continually create and modify mathematical models to calculate
stock value, review forecasts, and develop innovative risk strategies.
Equity research groups use the thousands of math and graphics functions in
MathWorks products to access stock data, perform statistical analysis, determine
derivatives pricing, perform sensitivity analyses, and run Monte Carlo
simulations. The graphics capabilities in MATLAB offer a variety of ways to
review time series data, visualize portfolio risks and returns, and create
forecasting graphs.
Investment management and trading research groups likewise use the thousands of
math and graphics functions in MathWorks products to access securities data and
carry out similar analyses.
According to Fund of Funds analyst Fred Gehm, "There are two types of
quantitative analysis and, therefore, two types of quants. One type works
primarily with mathematical models and the other primarily with statistical
models. While there is no logical reason why one person can't do both kinds of
work, this doesn’t seem to happen, perhaps because these types demand different
skill sets and, much more important, different psychologies."
According to a July 2008 Aite Group report, today quants often use alpha
generation platforms to help them develop financial models. These software
solutions enable quants to centralize and streamline the alpha generation process.
Areas of application
• Investment banking
• Forecasting
• Risk Management software
• Corporate strategic planning
• Securities trading and financial risk management
• Derivatives trading and risk management
• Investment management
• Pension scheme
• Insurance policy
• Mortgage agreement
• Lottery design
• Islamic banking
• Currency peg
• Gold and commodity valuation
• Collateralised debt obligation
• Credit default swap
• Bargaining
• Market mechanism design
Classification of methods
1) Mathematical finance
2) Numerical methods
3) Computer simulations
4) Computational intelligence
5) Financial risk
Mathematical finance
The subject has a close relationship with the discipline of financial economics,
which is concerned with much of the underlying theory. Generally, mathematical
finance will derive, and extend, the mathematical or numerical models suggested
by financial economics. Thus, for example, while a financial economist might
study the structural reasons why a company may have a certain share price, a
financial mathematician may take the share price as a given, and attempt to use
stochastic calculus to obtain the fair value of derivatives of the stock (see:
Valuation of options).
In terms of practice, mathematical finance also overlaps heavily with the field of
computational finance (also known as financial engineering). Arguably, these are
largely synonymous, although the latter focuses on application, while the former
focuses on modeling and derivation (see: Quantitative analyst).
History
Markowitz's thesis set out a strategy to understand and quantify the risk (i.e.
variance) and return (i.e. mean) of an entire portfolio of stocks and bonds; an
optimization strategy was then used to choose a portfolio with the largest mean
return subject to acceptable levels of variance in the return. Simultaneously,
William Sharpe developed the
mathematics of determining the correlation between each stock and the market.
For their pioneering work, Markowitz and Sharpe, along with Merton Miller,
shared the 1990 Nobel Prize in Economics, the first time the prize had been
awarded for work in finance.
Applied mathematics
Today, the term applied mathematics is used in a broader sense. It includes the
classical areas above, as well as other areas that have become increasingly
important in applications. Even fields such as number theory that are part of pure
mathematics are now important in applications (such as cryptology), though they
are not generally considered to be part of the field of applied mathematics per se.
Sometimes the term applicable mathematics is used to distinguish between the
traditional field of applied mathematics and the many more areas of mathematics
that are applicable to real-world problems.
The success of modern numerical mathematical methods and software has led to
the emergence of computational mathematics, computational science, and
computational engineering, which use high performance computing for the
simulation of phenomena and solution of problems in the sciences and
engineering. These are often considered interdisciplinary programs.
The advent of the computer has created new applications, both in studying and
using the new computer technology itself (computer science, which uses
combinatorics, formal logic, and lattice theory), as well as using computers to
study problems arising in other areas of science (computational science), and of
course studying the mathematics of computation (numerical analysis). Statistics is
probably the most widespread application of mathematics in the social sciences,
but other areas of mathematics are proving increasingly useful in these
disciplines, especially in economics and management science.
Scientific computing
Computer Science
Statistics
Actuarial science
Other disciplines
Mathematical tools
• Asymptotic analysis
• Calculus
• Copulas
• Differential equation
• Ergodic theory
• Gaussian copulas
• Numerical analysis
• Real analysis
• Probability
• Probability distribution
o Binomial distribution
o Log-normal distribution
• Expected value
• Value at risk
• Risk-neutral measure
• Stochastic calculus
o Brownian motion
o Lévy process
• Itô's lemma
• Fourier transform
• Girsanov's theorem
• Radon-Nikodym derivative
• Monte Carlo method
• Quantile function
• Partial differential equations
o Heat equation
• Martingale representation theorem
• Feynman-Kac formula
• Stochastic differential equations
• Volatility
o ARCH model
o GARCH model
• Stochastic volatility
• Mathematical model
• Numerical method
Derivatives pricing
Areas of application
• Computational finance
• Quantitative Behavioral Finance
• Derivative (finance), list of derivatives topics
• Modeling and analysis of financial markets
• International Swaps and Derivatives Association
• Fundamental financial concepts - topics
• Model (economics)
• List of finance topics
• List of economics topics, List of economists
• List of accounting topics
• Statistical Finance
Numerical analysis
One of the earliest mathematical writings is the Babylonian tablet YBC 7289,
which gives a sexagesimal numerical approximation of √2, the length of the
diagonal in a unit square. Being able to compute the sides of a triangle (and
hence, being able to compute square roots) is extremely important, for instance, in
carpentry and construction. In a rectangular wall section that is 2.40 meter by 3.75
meter, a diagonal beam has to be 4.45 meters long.
Numerical analysis naturally finds applications in all fields of engineering and the
physical sciences, but in the 21st century, the life sciences and even the arts have
adopted elements of scientific computations. Ordinary differential equations
appear in the movement of heavenly bodies (planets, stars and galaxies);
optimization occurs in portfolio management; numerical linear algebra is essential
to quantitative psychology; stochastic differential equations and Markov chains
are essential in simulating living cells for medicine and biology.
General introduction
The overall goal of the field of numerical analysis is the design and analysis of
techniques to give approximate but accurate solutions to hard problems, the
variety of which is suggested by the following.
• Hedge funds (private investment funds) use tools from all fields of
numerical analysis to calculate the value of stocks and derivatives more
precisely than other market participants.
• Airlines use sophisticated optimization algorithms to decide ticket prices,
airplane and crew assignments and fuel needs. This field is also called
operations research.
• Insurance companies use numerical programs for actuarial analysis.
History
To facilitate computations by hand, large books were produced with formulas and
tables of data such as interpolation points and function coefficients. Using these
tables, often calculated out to 16 decimal places or more for some functions, one
could look up values to plug into the formulas given and achieve very good
numerical estimates of some functions. The canonical work in the field is the
NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a
very large number of commonly used formulas and functions and their values at
many points. The function values are no longer very useful when a computer is
available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation.
These calculators evolved into electronic computers in the 1940s, and it was then
found that these computers were also useful for administrative purposes. But the
invention of the computer also influenced the field of numerical analysis, since
now longer and more complicated calculations could be done.
3x³ + 4 = 28
Direct Method
3x³ + 4 = 28.
Subtract 4: 3x³ = 24.
Divide by 3: x³ = 8.
Take cube roots: x = 2.
For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial
values are a = 0, b = 3, f(a) = −24, f(b) = 57.
Iterative Method
a b mid f(mid)
0 3 1.5 -13.875
1.5 3 2.25 10.17...
1.5 2.25 1.875 -4.22...
1.875 2.25 2.0625 2.32...
We conclude from this table that the solution is between 1.875 and 2.0625. The
algorithm might return any number in that range with an error less than 0.2.
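The same bisection iteration is easy to mechanize. A minimal Python version,
applied to f(x) = 3x³ − 24 with the starting bracket a = 0, b = 3 used above:

    def bisect(f, a, b, tol=1e-6):
        # Bisection: f(a) and f(b) must have opposite signs.
        fa = f(a)
        while b - a > tol:
            mid = (a + b) / 2
            fmid = f(mid)
            if fa * fmid <= 0:    # root lies in [a, mid]
                b = mid
            else:                 # root lies in [mid, b]
                a, fa = mid, fmid
        return (a + b) / 2

    print(bisect(lambda x: 3 * x ** 3 - 24, 0, 3))   # ≈ 2.0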
In a two-hour race, we have measured the speed of the car at three instants and
recorded them in the following table (the speeds are those implied by the
distances computed below):

Time            0:20    1:00    1:40
Speed (km/h)    140     150     180
A discretization would be to say that the speed of the car was constant from 0:00
to 0:40, then from 0:40 to 1:20 and finally from 1:20 to 2:00. For instance, the
total distance traveled in the first 40 minutes is approximately (2/3 h × 140 km/h)
= 93.3 km. This would allow us to estimate the total distance traveled as
93.3 km + 100 km + 120 km = 313.3 km, which is an example of numerical
integration (see below) using a Riemann sum, because displacement is the
integral of velocity.
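The same Riemann-sum estimate in Python, treating each measured speed as
constant over its 40-minute (2/3 h) interval:

    speeds = [140, 150, 180]            # km/h, measured at 0:20, 1:00, 1:40
    dt = 2 / 3                          # hours per interval
    print(sum(v * dt for v in speeds))  # ≈ 313.3 km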
Ill-conditioned problem: take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and
f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly
1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Iterative methods are more common than direct methods in numerical analysis.
Some methods are direct in principle but are usually used as though they were
not, e.g. GMRES and the conjugate gradient method. For these methods the
number of steps needed to obtain the exact solution is so large that an
approximation is accepted in the same manner as for an iterative method.
The generation and propagation of errors
The study of errors forms an important part of numerical analysis. There are
several ways in which error can be introduced in the solution of the problem.
Round-off
Round-off errors arise because it is impossible to represent all real numbers
exactly on a machine with finite memory, so even elementary operations such as
addition must often round their results.
Truncation errors are committed when an iterative method is terminated and the
approximate solution differs from the exact solution. Similarly, discretization
induces a discretization error because the solution of the discrete problem does
not coincide with the solution of the continuous problem. For instance, in the
iteration in the sidebar to compute the solution of 3x³ + 4 = 28, after 10 or so
iterations, we conclude that the root is roughly 1.99 (for example). We therefore
have a truncation error of 0.01.
Once an error is generated, it will generally propagate through the calculation. For
instance, we have already noted that the operation + on a calculator (or a
computer) is inexact. It follows that a calculation of the type a+b+c+d+e is even
more inexact.
Both the original problem and the algorithm used to solve that problem can be
well-conditioned and/or ill-conditioned, and any combination is possible.
In a classic comparison of two iterative schemes for computing √2, the Babylonian
method converges quickly regardless of the initial guess, whereas the contrived
"Method X" converges extremely slowly with initial guess 1.4 and diverges for
initial guess 1.42. Hence, the Babylonian method is numerically stable, while
Method X is numerically unstable.
Areas of study
Optimization: Say you sell lemonade at a lemonade stand, and notice that at $1,
you can sell 197 glasses of lemonade per day, and that for each increase of $0.01,
you will sell one less lemonade per day. If you could charge $1.485, you would
maximize your profit, but due to the constraint of having to charge a whole cent
amount, charging $1.49 per glass will yield the maximum income of $220.52 per
day.
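Because only whole-cent prices are allowed, this small problem can be checked by
brute force; note that, by symmetry about $1.485, a price of $1.48 ties $1.49 at
$220.52 per day. A Python sketch:

    # Glasses sold at a whole-cent price p: 197 - 100 * (p - 1.00).
    income = lambda p: p * (197 - round(100 * (p - 1.00)))
    prices = [round(1.00 + 0.01 * k, 2) for k in range(100)]
    best = max(prices, key=income)
    print(best, round(income(best), 2))   # 1.48 (tied with 1.49) -> 220.52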
Differential equation: If you set up 100 fans to blow air from one end of the
room to the other and then you drop a feather into the wind, what happens? The
feather will follow the air currents, which may be very complex. One
approximation is to measure the speed at which the air is blowing near the feather
every second, and advance the simulated feather as if it were moving in a straight
line at that same speed for one second, before measuring the wind speed again.
This is called the Euler method for solving an ordinary differential equation.
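A minimal Python version of this idea, applied to the textbook equation y' = y
with y(0) = 1, so that the exact answer at t = 1 is e ≈ 2.718:

    def euler(f, y0, t0, t1, n):
        # Advance y' = f(t, y) from t0 to t1 in n straight-line steps.
        h = (t1 - t0) / n
        t, y = t0, y0
        for _ in range(n):
            y += h * f(t, y)
            t += h
        return y

    print(euler(lambda t, y: y, 1.0, 0.0, 1.0, 1000))   # ≈ 2.717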
One of the simplest problems is the evaluation of a function at a given point. The
most straightforward approach, of just plugging the number into the formula, is
sometimes not very efficient. For polynomials, a better approach is using the
Horner scheme, since it reduces the necessary number of multiplications and
additions. Generally, it is important to estimate and control round-off errors
arising from the use of floating point arithmetic.
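A sketch of the Horner scheme in Python, reusing the polynomial 3x³ + 4 from the
sidebar above:

    def horner(coeffs, x):
        # Evaluate a polynomial with coefficients listed from the highest
        # degree down, using one multiplication and addition per coefficient.
        result = 0.0
        for c in coeffs:
            result = result * x + c
        return result

    print(horner([3, 0, 0, 4], 2))   # 3x^3 + 4 at x = 2 → 28.0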
Interpolation solves the following problem: given the value of some unknown
function at a number of points, what value does that function have at some other
point between the given points? A very simple method is to use linear
interpolation, which assumes that the unknown function is linear between every
pair of successive points. This can be generalized to polynomial interpolation,
which is sometimes more accurate but suffers from Runge's phenomenon. Other
interpolation methods use localized functions like splines or wavelets.
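Linear interpolation between tabulated points takes only a few lines; a Python
sketch with made-up data:

    def lerp(points, x):
        # Piecewise-linear interpolation through sorted (x, y) points.
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("x lies outside the given points")

    print(lerp([(0, 0), (1, 2), (2, 3)], 1.5))   # → 2.5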
Regression is also similar, but it takes into account that the data is imprecise.
Given some points, and a measurement of the value of some function at these
points (with an error), we want to determine the unknown function. The least
squares-method is one popular way to achieve this.
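As a minimal sketch of the least-squares method, the following Python fragment
fits a straight line to a few invented, imprecise measurements (np.polyfit
performs the least-squares fit):

    import numpy as np

    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([0.1, 0.9, 2.2, 2.8])        # noisy data
    slope, intercept = np.polyfit(xs, ys, 1)   # least-squares line fit
    print(slope, intercept)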
Areas of application
• Scientific computing
• List of numerical analysis topics
• Gram-Schmidt process
• Numerical differentiation
• Symbolic-numeric computation
General
• numerical-methods.com
• numericalmathematics.com
• Numerical Recipes
• "Alternatives to Numerical Recipes"
• Scientific computing FAQ
Software
Since the late twentieth century, most algorithms are implemented in a variety of
programming languages. The Netlib repository contains various collections of
software routines for numerical problems, mostly in Fortran and C. Commercial
products implementing many different numerical algorithms include the IMSL
and NAG libraries; a free alternative is the GNU Scientific Library.
Many computer algebra systems such as Mathematica also benefit from the
availability of arbitrary precision arithmetic which can provide more accurate
results.
Also, any spreadsheet software can be used to solve simple problems relating to
numerical analysis.
Computational intelligence
Artificial intelligence
The field was founded on the claim that a central property of human beings,
intelligence—the sapience of Homo sapiens—can be so precisely described that it
can be simulated by a machine. This raises philosophical issues about the nature
of the mind and limits of scientific hubris, issues which have been addressed by
myth, fiction and philosophy since antiquity. Artificial intelligence has been the
subject of breathtaking optimism, has suffered stunning setbacks and, today, has
become an essential part of the technology industry, providing the heavy lifting
for many of the most difficult problems in computer science.
AI research is highly technical and specialized, so much so that some critics decry
the "fragmentation" of the field. Subfields of AI are organized around particular
problems, the application of particular tools and around longstanding theoretical
differences of opinion. The central problems of AI include such traits as
reasoning, knowledge, planning, learning, communication, perception and the
ability to move and manipulate objects. General intelligence (or "strong AI") is
still a long-term goal of (some) research.
Perspectives on CI
Thinking machines and artificial beings appear in Greek myths, such as Talos of
Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human
likenesses believed to have intelligence were built in many ancient societies;
some of the earliest being the sacred statues worshipped in Egypt and Greece, and
including the machines of Yan Shi, Hero of Alexandria, Al-Jazari or Wolfgang
von Kempelen. It was widely believed that artificial beings had been created by
Geber, Judah Loew and Paracelsus. Stories of these creatures and their fates
discuss many of the same hopes, fears and ethical concerns that are presented by
artificial intelligence.
Another issue explored by both science fiction writers and futurists is the impact
of artificial intelligence on society. In fiction, AI has appeared as a servant (R2D2
in Star Wars), a law enforcer (K.I.T.T. in "Knight Rider"), a comrade (Lt.
Commander Data in Star Trek), a conqueror (The Matrix), a dictator (With Folded
Hands), an exterminator (Terminator, Battlestar Galactica), an extension to human
abilities (Ghost in the Shell) and the saviour of the human race (R. Daneel Olivaw
in the Foundation Series). Academic sources have considered such consequences
as: a decreased demand for human labor, the enhancement of human ability or
experience, and a need for redefinition of human identity and basic values.
Several futurists argue that artificial intelligence will transcend the limits of
progress and fundamentally transform humanity. Ray Kurzweil has used Moore's
law (which describes the relentless exponential improvement in digital technology
with uncanny accuracy) to calculate that desktop computers will have the same
processing power as human brains by the year 2029, and that by 2045 artificial
intelligence will reach a point where it is able to improve itself at a rate that far
exceeds anything conceivable in the past, a scenario that science fiction writer
Vernor Vinge named the "technological singularity". Edward Fredkin argues that
"artificial intelligence is the next stage in evolution," an idea first proposed by
Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by
George Dyson in his book of the same name in 1998. Several futurists and science
fiction writers have predicted that human beings and machines will merge in the
future into cyborgs that are more capable and powerful than either. This idea,
called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, is
now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick
and inventor Ray Kurzweil. Transhumanism has been illustrated in fiction as well,
for example in the manga Ghost in the Shell and the science fiction series Dune.
Pamela McCorduck writes that these scenarios are expressions of an ancient
human desire to, as she calls it, "forge the gods."
History of CI research
In the middle of the 20th century, a handful of scientists began a new approach to
building intelligent machines, based on recent discoveries in neurology, a new
mathematical theory of information, an understanding of control and stability
called cybernetics, and above all, by the invention of the digital computer, a
machine based on the abstract essence of mathematical reasoning.
The field's founders made confident predictions of rapid progress, but these
predictions, and many like them, would not come true: they had failed to
recognize the difficulty of some of the problems they faced. In 1974, in response
to the criticism of England's Sir James Lighthill and ongoing pressure from
Congress to fund more productive projects, the U.S. and British governments cut
off all undirected, exploratory research in AI. This was the first AI winter.
In the early 80s, AI research was revived by the commercial success of expert
systems, a form of AI program that simulated the knowledge and analytical skills
of one or more human experts. By 1985 the market for AI had reached more than
a billion dollars, and governments around the world poured money back into the
field. However, just a few years later, beginning with the collapse of the Lisp
Machine market in 1987, AI once again fell into disrepute, and a second, longer
lasting AI winter began.
In the 90s and early 21st century, AI achieved its greatest successes, albeit
somewhat behind the scenes. Artificial intelligence is used for logistics, data
mining, medical diagnosis and many other areas throughout the technology
industry. The success was due to several factors: the incredible power of
computers today (see Moore's law), a greater emphasis on solving specific
subproblems, the creation of new ties between AI and other fields working on
similar problems, and above all a new commitment by researchers to solid
mathematical methods and rigorous scientific standards.
Philosophy of AI
"A physical symbol system has the necessary and sufficient means of general
intelligent action." This statement claims that the essence of intelligence is
symbol manipulation. Hubert Dreyfus argued that, on the contrary, human
expertise depends on unconscious instinct rather than conscious symbol
manipulation and on having a "feel" for the situation rather than explicit
symbolic knowledge.
A formal system (such as a computer program) cannot prove all true
statements. Roger Penrose is among those who claim that Gödel's theorem
limits what machines can do.
"The appropriately programmed computer with the right inputs and outputs
would thereby have a mind in exactly the same sense human beings have
minds." Searle counters this assertion with his Chinese room argument, which
asks us to look inside the computer and try to find where the "mind" might be.
The brain can be simulated. Hans Moravec, Ray Kurzweil and others have
argued that it is technologically feasible to copy the brain directly into
hardware and software, and that such a simulation will be essentially identical
to the original.
CI research
In the 21st century, AI research has become highly specialized and technical. It is
deeply divided into subfields that often fail to communicate with each other.
Subfields have grown up around particular institutions, the work of particular
researchers, particular problems (listed below), long standing differences of
opinion about how AI should be done (listed as "approaches" below) and the
application of widely differing tools (see tools of AI, below).
Problems of AI
The problem of simulating (or creating) intelligence has been broken down into a
number of specific sub-problems. These consist of particular traits or capabilities
that researchers would like an intelligent system to display. The traits described
below have received the most attention.
Human beings solve most of their problems using fast, intuitive judgments rather
than the conscious, step-by-step deduction that early AI research was able to
model. AI has made some progress at imitating this kind of "sub-symbolic"
problem solving: embodied approaches emphasize the importance of sensorimotor
skills to higher reasoning; neural net research attempts to simulate the structures
inside human and animal brains that give rise to this skill.
Knowledge representation
Many of the things people know take the form of "working assumptions." For
example, if a bird comes up in conversation, people typically picture an
animal that is fist-sized, sings, and flies. None of these things are true about all
birds. John McCarthy identified this problem in 1969 as the qualification
problem: for any commonsense rule that AI researchers care to represent,
there tend to be a huge number of exceptions. Almost nothing is simply true or
false in the way that abstract logic requires. AI research has explored a
number of solutions to this problem.
The number of atomic facts that the average person knows is astronomical.
Research projects that attempt to build a complete knowledge base of
commonsense knowledge (e.g., Cyc) require enormous amounts of laborious
ontological engineering — they must be built, by hand, one complicated
concept at a time.
Much of what people know isn't represented as "facts" or "statements" that they
could actually say out loud. For example, a chess master will avoid a particular
chess position because it "feels too exposed" or an art critic can take one look at a
statue and instantly realize that it is a fake. These are intuitions or tendencies that
are represented in the brain non-consciously and sub-symbolically. Knowledge
like this informs, supports and provides a context for symbolic, conscious
knowledge. As with the related problem of sub-symbolic reasoning, it is hoped
that situated AI or computational intelligence will provide ways to represent this
kind of knowledge.
Planning
Intelligent agents must be able to set goals and achieve them. They need a way to
visualize the future (they must have a representation of the state of the world and
be able to make predictions about how their actions will change it) and be able to
make choices that maximize the utility (or "value") of the available choices.
In some planning problems, the agent can assume that it is the only thing acting
on the world and it can be certain what the consequences of its actions may be.
However, if this is not true, it must periodically check if the world matches its
predictions and it must change its plan as this becomes necessary, requiring the
agent to reason under uncertainty.
Natural language processing
Natural language processing gives machines the ability to read and understand the
languages that human beings speak. Many researchers hope that a sufficiently
powerful natural language processing system would be able to acquire knowledge
on its own, by reading the existing text available over the internet. Some
straightforward applications of natural language processing include information
retrieval (or text mining) and machine translation.
The field of robotics is closely related to AI. Intelligence is required for robots to
be able to handle such tasks as object manipulation and navigation, with sub-
problems of localization (knowing where you are), mapping (learning what is
around you) and motion planning (figuring out how to get there).
Perception
Machine perception is the ability to use input from sensors (such as cameras,
microphones, sonar and others more exotic) to deduce aspects of the world.
Computer vision is the ability to analyze visual input. A few selected subproblems
are speech recognition, facial recognition and object recognition.
Social intelligence
Emotion and social skills play two roles for an intelligent agent: it must be able
to predict the actions of others by understanding their motives and emotional
states, and, for good human-computer interaction, it may itself need to display
emotions.
Creativity
General intelligence
Most researchers hope that their work will eventually be incorporated into a
machine with general intelligence (known as strong AI), combining all the skills
above and exceeding human abilities at most or all of them. A few believe that
anthropomorphic features like artificial consciousness or an artificial brain may be
required for such a project.
Many of the problems above are considered AI-complete: to solve one problem,
you must solve them all. For example, even a straightforward, specific task like
machine translation requires that the machine follow the author's argument
(reason), know what it's talking about (knowledge), and faithfully reproduce the
author's intention (social intelligence). Machine translation, therefore, is believed
to be AI-complete: it may require strong AI to be done as well as humans can do
it.
Approaches to CI
In the 1940s and 1950s, a number of researchers explored the connection between
neurology, information theory, and cybernetics. Some of them built machines that
used electronic networks to exhibit rudimentary intelligence, such as W. Grey
Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered
for meetings of the Teleological Society at Princeton University and the Ratio
Club in England. By 1960, this approach was largely abandoned, although
elements of it would be revived in the 1980s.
Traditional symbolic AI
Cognitive simulation
Economist Herbert Simon and Allen Newell studied human problem solving
skills and attempted to formalize them, and their work laid the foundations of
the field of artificial intelligence, as well as of cognitive science, operations
research and management science.
Logical AI
Unlike Newell and Simon, John McCarthy felt that machines did not need to
simulate human thought, but should instead try to find the essence of abstract
reasoning and problem solving, regardless of whether people used the same
algorithms. His laboratory at Stanford (SAIL) focused on using formal logic
to solve a wide variety of problems, including knowledge representation,
planning and learning. Logic was also the focus of the work at the University of
Edinburgh and elsewhere in Europe which led to the development of the
programming language Prolog and the science of logic programming.
"Scruffy" symbolic AI
Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that
solving difficult problems in vision and natural language processing required
ad-hoc solutions – they argued that there was no simple and general principle
(like logic) that would capture all the aspects of intelligent behavior. Roger
Schank described their "anti-logic" approaches as "scruffy" (as opposed to the
"neat" paradigms at CMU and Stanford). Commonsense knowledge bases
(such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be
built by hand, one complicated concept at a time.
Knowledge based AI
Sub-symbolic AI
During the 1960s, symbolic approaches had achieved great success at simulating
high-level thinking in small demonstration programs. Approaches based on
cybernetics or artificial neural networks were, for a time, abandoned or pushed
into the background.
Computational Intelligence
Statistical AI
Tools of CI research
Simple exhaustive searches are rarely sufficient for most real world problems: the
search space (the number of places to search) quickly grows to astronomical
numbers. The result is a search that is too slow or never completes. The solution,
for many problems, is to use "heuristics" or "rules of thumb" that eliminate
choices that are unlikely to lead to the goal (called "pruning the search tree").
Heuristics supply the program with a "best guess" for what path the solution lies
on.
A very different kind of search came to prominence in the 1990s, based on the
mathematical theory of optimization. For many problems, it is possible to begin
the search with some form of a guess and then refine the guess incrementally
until no more refinements can be made.
Logic
Logic was introduced into AI research by John McCarthy in his 1958 Advice
Taker proposal. In 1963, J. Alan Robinson discovered a simple, complete and
entirely algorithmic method for logical deduction which can easily be performed
by digital computers. However, a naive implementation of the algorithm quickly
leads to a combinatorial explosion or an infinite loop. In 1974, Robert Kowalski
suggested representing logical expressions as Horn clauses (statements in the
form of rules: "if p then q"), which reduced logical deduction to backward
chaining or forward chaining. This greatly alleviated (but did not eliminate) the
problem.
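As a minimal illustration of rule-based deduction over Horn clauses, naive
forward chaining can be written in a few lines of Python; the rules and facts
below are invented:

    # Horn rules: if every fact in `body` holds, conclude `head`.
    rules = [({"p"}, "q"), ({"q"}, "r")]
    facts = {"p"}
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)   # fire the rule
                changed = True
    print(facts)   # {'p', 'q', 'r'}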
Logic is used for knowledge representation and problem solving, but it can be
applied to other problems as well. For example, the satplan algorithm uses logic
for planning, and inductive logic programming is a method for learning. There are
several different forms of logic used in AI research, including situation
calculus, event calculus and fluent calculus (for representing events and time);
causal calculus; belief calculus; and modal logics.
Bayesian networks are a very general tool that can be used for a large number of
problems: reasoning (using the Bayesian inference algorithm), learning (using the
expectation-maximization algorithm), planning (using decision networks) and
perception (using dynamic Bayesian networks).
Probabilistic algorithms can also be used for filtering, prediction, smoothing and
finding explanations for streams of data, helping perception systems to analyze
processes that occur over time (e.g., hidden Markov models or Kalman filters).
The simplest AI applications can be divided into two types: classifiers ("if shiny
then diamond") and controllers ("if shiny then pick up"). Controllers do however
also classify conditions before inferring actions, and therefore classification forms
a central part of many AI systems.
Classifiers are functions that use pattern matching to determine a closest match.
They can be tuned according to examples, making them very attractive for use in
AI. These examples are known as observations or patterns. In supervised learning,
each pattern belongs to a certain predefined class. A class can be seen as a
decision that has to be made. All the observations combined with their class labels
are known as a data set.
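As a minimal sketch of these ideas, the following Python fragment implements a
one-nearest-neighbor classifier over an invented data set; the observations and
class labels are purely illustrative:

    # Each observation pairs a feature vector with a class label.
    data_set = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 5.0), "B")]

    def classify(data_set, pattern):
        dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        features, label = min(data_set, key=lambda row: dist(row[0], pattern))
        return label

    print(classify(data_set, (1.1, 0.9)))   # → "A"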
A wide range of classifiers are available, each with its strengths and weaknesses.
Classifier performance depends greatly on the characteristics of the data to be
classified. There is no single classifier that works best on all given problems; this
is also referred to as the "no free lunch" theorem. Various empirical tests have
been performed to compare classifier performance and to find the characteristics
of data that determine classifier performance. Determining a suitable classifier for
a given problem is however still more an art than science.
The most widely used classifiers are the neural network, kernel methods such as
the support vector machine, k-nearest neighbor algorithm, Gaussian mixture
model, naive Bayes classifier, and decision tree. The performance of these
classifiers has been compared over a wide range of classification tasks in order
to find data characteristics that determine classifier performance.
Neural networks
The study of artificial neural networks began in the decade before the field of AI
research was founded. In the late 1950s Frank Rosenblatt developed an important
early version, the perceptron. Paul Werbos developed the backpropagation
algorithm for multilayer perceptrons in 1974, which led to a renaissance in neural
network research and connectionism in general in the middle 1980s. The Hopfield
net, a form of attractor network, was first described by John Hopfield in 1982.
Control theory
Specialized languages
• IPL was the first language developed for artificial intelligence. It includes
features intended to support programs that could perform general problem
solving, including lists, associations, schemas (frames), dynamic memory
allocation, data types and recursion.
AI applications are also often written in standard languages like C++ and
languages designed for mathematics, such as MATLAB and Lush.
How can one determine if an agent is intelligent? In 1950, Alan Turing proposed a
general procedure to test the intelligence of an agent now known as the Turing
test. This procedure allows almost all the major problems of artificial intelligence
to be tested. However, it is a very difficult challenge and at present all agents fail.
Areas of application
• Simulated annealing
• Machine learning
• Artificial immune systems
• Expert systems
• Hybrid intelligent systems
• Hybrid logic
• Simulated reality
• Soft computing
• Bayesian networks
• Chaos theory
• Ant colony optimization
• Particle swarm optimization
• Cognitive robotics
• Developmental robotics
• Evolutionary robotics
• Intelligent agents
• Knowledge-Based Engineering
• Type-2 fuzzy sets and systems
Software
Organizations
Computer simulation
Computer simulations vary from computer programs that run a few minutes, to
network-based groups of computers running for hours, to ongoing simulations
that run for days. The scale of events being simulated by computer simulations
has far exceeded anything possible (or perhaps even imaginable) using the
traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-
battle simulation, of one force invading another, involved the modeling of 66,239
tanks, trucks and other vehicles on simulated terrain around Kuwait, using
multiple supercomputers in the DoD High Performance Computer Modernization
Program; a 1-billion-atom model of material deformation (2002); a 2.64-million-
atom model of the ribosome, the molecular machine that makes proteins in all
organisms, in 2005; and the Blue Brain project at EPFL (Switzerland), begun in
May 2005, to
create the first computer simulation of the entire human brain, right down to the
molecular level.
History
Computer simulation was developed hand-in-hand with the rapid growth of the
computer, following its first large-scale deployment during the Manhattan Project
in World War II to model the process of nuclear detonation. It was a simulation of
12 hard spheres using a Monte Carlo algorithm. Computer simulation is often
used as an adjunct to, or substitute for, modeling systems for which simple
closed form analytic solutions are not possible. There are many different types of
computer simulation; the common feature they all share is the attempt to generate
a sample of representative scenarios for a model in which a complete enumeration
of all possible states of the model would be prohibitive or impossible. Computer
models were initially used as a supplement for other arguments, but their use later
became rather widespread.
Data preparation
The data input/output for the simulation can be either through formatted text files
or a pre- and postprocessor.
Types
Formerly, the output data from a computer simulation was sometimes presented in
a table, or a matrix, showing how data was affected by numerous changes in the
simulation parameters. The use of the matrix format was related to traditional use
of the matrix concept in mathematical models; however, psychologists and
others noted that humans could quickly perceive trends by looking at graphs or
even moving images or motion pictures generated from the data, as displayed by
computer-generated-imagery (CGI) animation. Although observers couldn't
necessarily read out numbers, or spout math formulas, from observing a moving
weather chart, they might be able to predict events (and "see that rain was headed
their way"), much faster than scanning tables of rain-cloud coordinates. Such
intense graphical displays, which transcended the world of numbers and formulae,
sometimes also led to output that lacked a coordinate grid or omitted timestamps,
as if straying too far from numeric data displays. Today, weather forecasting
models tend to balance the view of moving rain/snow clouds against a map that
uses numeric coordinates and numeric timestamps of events.
Similarly, CGI computer simulations of CAT scans can simulate how a tumor
might shrink or change, during an extended period of medical treatment,
presenting the passage of time as a spinning view of the visible human head, as
the tumor changes.
Note that the term computer simulation is broader than computer modeling, which
implies that all aspects are being modeled in the computer representation.
However, computer simulation also includes generating inputs from simulated
users to run actual computer software or equipment, with only part of the system
being modeled: an example would be flight simulators which can run machines as
well as actual flight software.
Computer simulations are used in a wide variety of practical contexts.
The reliability of computer simulations, and the trust people put in them, depend
on the validity of the simulation model; therefore, verification and validation are of
crucial importance in the development of computer simulations. Another
important aspect of computer simulations is that of reproducibility of the results,
meaning that a simulation model should not provide a different answer for each
execution. Although this might seem obvious, this is a special point of attention in
stochastic simulations, where random numbers should actually be semi-random
numbers. An exception to reproducibility are human in the loop simulations such
as flight simulations and computer games. Here a human is part of the simulation
and thus influences the outcome in a way that is hard if not impossible to
reproduce exactly.
Pitfalls
Areas of application
• ACT-R
• Articulatory synthesis
• Artificial life
• CAVE
• Computer-aided design
• Computer simulation and organizational studies
• Dry Lab
• Earth Simulator
• Emulator
• Experiment in silico
• Global climate model
• Ice sheet model
• List of computer simulation software
• Mathematical model
• MapleSim
• Molecular dynamics
• SimApp
• Simcyp Simulator
• Simulated reality
• Social simulation
• Solver (computer science)
• Web based simulation
Organizations
Education
Examples
Financial Risk
For the most part, risk management methodologies consist of the following
elements, performed, more or less, in the following order: establish the context,
identify the risks, assess them, decide how to treat them, and implement and
review the resulting plan.
The strategies to manage risk include transferring the risk to another party,
avoiding the risk, reducing the negative effect of the risk, and accepting some or
all of the consequences of a particular risk.
Introduction
Intangible risk management identifies a new type of risk that has a 100%
probability of occurring but is ignored by the organization due to a lack of
identification ability. For example, when deficient knowledge is applied to a
situation, a knowledge risk materializes.
Risk management also faces difficulties allocating resources. This is the idea of
opportunity cost. Resources spent on risk management could have been spent on
more profitable activities. Again, ideal risk management minimizes spending
while maximizing the reduction of the negative effects of risks.
Process
Identification
After establishing the context, the next step in the process of managing risk is to
identify potential risks. Risks are about events that, when triggered, cause
problems. Hence, risk identification can start with the source of problems, or with
the problem itself.
• Problem analysis: risks are related to identified threats, for example the
threat of losing money, the threat of abuse of privacy information or the
threat of accidents and casualties. The threats may exist with various
entities, most importantly with shareholders, customers and legislative
bodies such as the government.
When either source or problem is known, the events that a source may trigger or
the events that can lead to a problem can be investigated. For example:
stakeholders withdrawing during a project may endanger funding of the project;
privacy information may be stolen by employees even within a closed network;
lightning striking a Boeing 747 during takeoff may make all people onboard
immediate casualties.
The chosen method of identifying risks may depend on culture, industry practice
and compliance. The identification methods are formed by templates or the
development of templates for identifying source, problem or event; a number of
common identification methods are built on such templates.
Assessment
Once risks have been identified, they must then be assessed as to their potential
severity of loss and probability of occurrence. These quantities can be either
simple to measure, as in the case of the value of a lost building, or impossible to
know for sure, as in the case of the probability of an unlikely event occurring.
Therefore, in the assessment process it is critical to make the best educated
guesses possible in order to properly prioritize the implementation of the risk
management plan.
Later research has shown that the financial benefits of risk management depend
less on the formula used and more on the frequency and quality with which risk
assessment is performed.
Once risks have been identified and assessed, all techniques to manage the risk
fall into one or more of these four major categories:
• Avoidance (eliminate)
• Reduction (mitigate)
• Transfer (outsource or insure)
• Retention (accept and budget)
Ideal use of these strategies may not be possible. Some of them may involve
trade-offs that are not acceptable to the organization or person making the risk
management decisions. Another source, from the US Department of Defense,
Defense Acquisition University, calls these categories ACAT, for Avoid, Control,
Accept, or Transfer. This use of the ACAT acronym is reminiscent of another
ACAT (for Acquisition Category) used in US Defense industry procurements, in
which Risk Management figures prominently in decision making and planning.
Risk avoidance
Includes not performing an activity that could carry risk. An example would be
not buying a property or business in order to not take on the liability that comes
with it. Another would be not flying in order not to take the risk that the
airplane will be hijacked. Avoidance may seem the answer to all risks, but avoiding
risks also means losing out on the potential gain that accepting (retaining) the risk
may have allowed. Not entering a business to avoid the risk of loss also avoids the
possibility of earning profits.
Risk reduction
Involves methods that reduce the severity of the loss or the likelihood of the loss
from occurring. For example, sprinklers are designed to put out a fire to reduce
the risk of loss by fire. This method may cause a greater loss by water damage
and therefore may not be suitable. Halon fire suppression systems may mitigate
that risk, but the cost may be prohibitive as a strategy. Risk management may also
take the form of a set policy, such as only allowing the use of secured IM platforms
(like Brosix) and not allowing personal IM platforms (like AIM) to be used in
order to reduce the risk of data leaks.
Risk retention
Involves accepting the loss when it occurs. True self-insurance falls in this
category. Risk retention is a viable strategy for small risks where the cost of
insuring against the risk would be greater over time than the total losses sustained.
All risks that are not avoided or transferred are retained by default. This includes
risks that are so large or catastrophic that they either cannot be insured against or
the premiums would be infeasible. War is an example since most property and
risks are not insured against war, so the loss attributed by war is retained by the
insured. Also any amounts of potential loss (risk) over the amount insured is
retained risk. This may also be acceptable if the chance of a very large loss is
small or if the cost to insure for greater coverage amounts is so great it would
hinder the goals of the organization too much.
Risk transfer
Means causing another party to accept the risk, typically by contract or by
hedging. Some ways of managing risk fall into multiple categories; risk retention pools are
technically retaining the risk for the group, but spreading it over the whole group
involves transfer among individual members of the group. This is different from
traditional insurance, in that no premium is exchanged between members of the
group up front, but instead losses are assessed to all members of the group.
The risk management plan should propose applicable and effective security
controls for managing the risks. For example, an observed high risk of computer
viruses could be mitigated by acquiring and implementing antivirus software. A
good risk management plan should contain a schedule for control implementation
and responsible persons for those actions.
According to ISO/IEC 27001, the stage immediately after completion of the Risk
Assessment phase consists of preparing a Risk Treatment Plan, which should
document the decisions about how each of the identified risks should be handled.
Mitigation of risks often means selection of security controls, which should be
documented in a Statement of Applicability, which identifies which particular
control objectives and controls from the standard have been selected, and why.
Implementation
Follow all of the planned methods for mitigating the effect of the risks. Purchase
insurance policies for the risks that have been decided to be transferred to an
insurer, avoid all risks that can be avoided without sacrificing the entity's goals,
reduce others, and retain the rest.
Initial risk management plans will never be perfect. Practice, experience, and
actual loss results will necessitate changes in the plan and contribute information
to allow possible different decisions to be made in dealing with the risks being
faced.
Limitations
If risks are improperly assessed and prioritized, time can be wasted in dealing
with risk of losses that are not likely to occur. Spending too much time assessing
and managing unlikely risks can divert resources that could be used more
profitably. Unlikely events do occur but if the risk is unlikely enough to occur it
may be better to simply retain the risk and deal with the result if the loss does in
fact occur. Qualitative risk assessment is subjective and lacks consistency. The
primary justification for a formal risk assessment process is legal and
bureaucratic.
Prioritizing too highly the risk management processes could keep an organization
from ever completing a project or even getting started. This is especially true if
other work is suspended until the risk management process is considered
complete.
It is also important to keep in mind the distinction between risk and uncertainty.
Risk can be measured as impact × probability.
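As a hypothetical worked example of this measure: an event with a 5% annual
probability and a $200,000 impact carries a risk of

    0.05 × $200,000 = $10,000 per year,

which can then be weighed against the cost of mitigating or insuring the risk.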
The Basel II framework breaks risks into market risk (price risk), credit risk and
operational risk and also specifies methods for calculating capital requirements
for each of these components.
In the more general case, every probable risk can have a pre-formulated plan to
deal with its possible consequences (to ensure contingency if the risk becomes a
liability).
From the information above and the average cost per employee over time, or cost
accrual ratio, a project manager can estimate the cost of the risks a project
faces. Risk management as applied to project management includes the following
activities:
• Planning how risk will be managed in the particular project. The plan should
include risk management tasks, responsibilities, activities and budget.
• Assigning a risk officer - a team member other than the project manager
who is responsible for foreseeing potential project problems. A typical
characteristic of a risk officer is healthy skepticism.
• Maintaining a live project risk database. Each risk should have the
following attributes: opening date, title, short description, probability and
importance. Optionally a risk may have an assigned person responsible for
its resolution and a date by which the risk must be resolved.
• Creating an anonymous risk reporting channel, so that each team member
can report any risk that he or she foresees in the project.
• Preparing mitigation plans for risks that are chosen to be mitigated. The
purpose of the mitigation plan is to describe how this particular risk will
be handled – what will be done, when, by whom and how, in order to avoid the
risk or minimize its consequences if it becomes a liability.
• Summarizing planned and faced risks, effectiveness of mitigation
activities, and effort spent for the risk management.
Risk Communication
Risk communication refers to the idea that people are uncomfortable talking about
risk. People tend to put off admitting that risk is involved, as well as
communicating about risks and crises. Risk Communication can also be linked to
Crisis communication.
Areas of application
• Risk analysis (engineering)