Numerical analysis:
Numerical analysis is the study of algorithms that use numerical
approximation (as opposed to general symbolic manipulations) for the
problems of mathematical analysis (as distinguished from discrete
mathematics).
One of the earliest mathematical writings is a Babylonian tablet from
the Yale Babylonian Collection (YBC 7289), which gives a sexagesimal
numerical approximation of the square root of 2, the length of the diagonal
in a unit square. Being able to compute the sides of a triangle (and hence,
being able to compute square roots) is extremely important, for instance,
in astronomy, carpentry and construction.
Numerical analysis continues this long tradition of practical
mathematical calculations. Much like the Babylonian approximation of the
square root of 2, modern numerical analysis does not seek exact answers,
because exact answers are often impossible to obtain in practice. Instead,
much of numerical analysis is concerned with obtaining approximate
solutions while maintaining reasonable bounds on errors.
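The Babylonian value can be reproduced with Heron's iteration, an early numerical method for square roots. This is a minimal sketch (the function name is mine); it compares the iterate with the standard reading of the tablet's sexagesimal digits, 1;24,51,10:

```python
def babylonian_sqrt(a, x0=1.0, tol=1e-12, max_iter=50):
    """Approximate sqrt(a) by Heron's (Babylonian) iteration:
    repeatedly average x with a/x until successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

root2 = babylonian_sqrt(2.0)

# YBC 7289's sexagesimal value 1;24,51,10 is about 1.41421296,
# already accurate to six decimal places
ybc = 1 + 24/60 + 51/60**2 + 10/60**3
```

The iteration doubles the number of correct digits per step, which is why a few steps already match the tablet's accuracy.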
Numerical analysis naturally finds applications in all fields of
engineering and the physical sciences, but in the 21st century also the life
sciences and even the arts have adopted elements of scientific
computations. Ordinary differential equations appear in celestial
mechanics (planets, stars and galaxies); numerical linear algebra is important
for data analysis; stochastic differential equations and Markov chains are
essential in simulating living cells for medicine and biology.
Before the advent of modern computers, numerical methods often
depended on hand interpolation in large printed tables. Since the mid-20th
century, computers calculate the required functions instead. These same
interpolation formulas nevertheless continue to be used as part of the
software algorithms for solving differential equations.
General Introduction:
The overall goal of the field of numerical analysis is the design and analysis
of techniques to give approximate but accurate solutions to hard problems,
the variety of which is suggested by the following:
Advanced numerical methods are essential in making numerical
weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate
numerical solution of a system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using
computer simulations of car crashes. Such simulations essentially
consist of solving partial differential equations numerically.
Hedge funds (private investment funds) use tools from all fields of
numerical analysis to attempt to calculate the value of stocks and
derivatives more precisely than other market participants.
Airlines use sophisticated optimization algorithms to decide ticket
prices, airplane and crew assignments and fuel needs. Historically, such
algorithms were developed within the overlapping field of operations
research.
Insurance companies use numerical programs for actuarial analysis.
The rest of this section outlines several important themes of numerical
analysis.
History:
The field of numerical analysis predates the invention of modern
computers by many centuries. Linear interpolation was already in use more
than 2000 years ago. Many great mathematicians of the past were
preoccupied by numerical analysis, as is obvious from the names of
important algorithms like Newton's method, Lagrange interpolation
polynomial, Gaussian elimination, or Euler's method.
To facilitate computations by hand, large books were produced with
formulas and tables of data such as interpolation points and function values.
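Hand interpolation from such a table can be sketched as follows; the two sine-table entries here are illustrative, not from any particular printed table:

```python
import math

def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate between tabulated points (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1

# a two-entry slice of a hypothetical printed sine table, spaced 0.1 apart
approx = lerp(0.5, math.sin(0.5), 0.6, math.sin(0.6), 0.55)
# the interpolated value agrees with sin(0.55) to roughly three decimal places
```

The interpolation error shrinks with the square of the table spacing, which is why printed tables used fine spacing.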
Discretization:
Chapter two
Linear and nonlinear equations
In mathematics, a linear function (or map) f is one satisfying
both of the following properties:
Additivity (Superposition): f(x + y) = f(x) + f(y)
Homogeneity: f(ax) = a f(x)
The definition is very general in that x can be any sensible
mathematical object (number, vector, function, etc.), and the function f
can literally be any mapping, including integration or differentiation with
associated constraints (such as boundary values). If f
contains differentiation with respect to x, the result will be a differential
equation.
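The two defining properties can be spot-checked numerically. In this sketch the sample maps f and g are illustrative (not from the text); a numerical check at sample points is of course not a proof:

```python
def is_linear_at(f, x, y, alpha, tol=1e-9):
    """Check additivity f(x+y) = f(x) + f(y) and homogeneity
    f(alpha*x) = alpha*f(x) at given sample points."""
    additive = abs(f(x + y) - (f(x) + f(y))) < tol
    homogeneous = abs(f(alpha * x) - alpha * f(x)) < tol
    return additive and homogeneous

f = lambda x: 3.0 * x   # a linear map: scaling by a constant
g = lambda x: x * x     # squaring fails both properties

print(is_linear_at(f, 2.0, 5.0, 7.0))  # True
print(is_linear_at(g, 2.0, 5.0, 7.0))  # False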
Equations, whether linear or nonlinear, relate variables in such a way that one
can influence, or forecast, the output of another. In basic mathematics, linear
equations are the most popular choice of analysis, but nonlinear equations
dominate the realm of higher math and science.
Types of Equations:
Each equation gets its form based on the highest degree, or exponent,
of the variable. For instance, in the case where y = x^3 - 6x + 2, the degree of
3 gives this equation the name "cubic." Any equation that has a degree no
higher than 1 receives the name "linear." Otherwise, we call an equation
"nonlinear," whether it is quadratic, a sine curve or in any other form.
Input-Output Relationships:
In general, "x" is considered to be the input of an equation and "y" is
considered to be the output. In the case of a linear equation, any increase in
"x" will either cause an increase in "y" or a decrease in "y" corresponding to
the value of the slope. In contrast, in a nonlinear equation, "x" may not
always cause "y" to increase. For example, if y = (5 - x)^2, "y" decreases in
value as "x" approaches 5, but increases otherwise.
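Tabulating this example makes the turning point visible. The exponent in the original example appears to have been lost; a square is assumed here, since a plain y = 5 - x would be linear:

```python
def y(x):
    # assuming the example's lost exponent was 2: y = (5 - x)^2
    return (5 - x) ** 2

# as x moves toward 5, y decreases; past 5, y increases again
values = [y(x) for x in [2, 3, 4, 5, 6, 7]]
print(values)  # [9, 4, 1, 0, 1, 4]
```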
Graph Differences:
A graph displays the set of solutions for a given equation. In the case of
linear equations, the graph will always be a line. In contrast, a nonlinear
equation may look like a parabola if it is of degree 2, a curvy x-shape if it is
of degree 3, or any curvy variation thereof. While linear equations are always
straight, nonlinear equations often feature curves.
Exceptions:
Except for the case of vertical lines (x = a constant) and horizontal lines
(y = a constant), linear equations will exist for all values of "x" and "y."
Nonlinear equations, on the other hand, may not have solutions for certain
values of "x" or "y." For instance, if y = sqrt(x), then "x" exists only from 0
and beyond, as does "y," because the square root of a negative number does
not exist in the real number system and there are no square roots that result in
a negative output.
Benefits:
Linear relationships can be best explained by linear equations, where
the increase in one variable directly causes the increase or decrease of
another. For example, the number of cookies you eat in a day could have a
direct impact on your weight as illustrated by a linear equation. However, if
you were analyzing the division of cells under mitosis, a nonlinear,
exponential equation would fit the data better.
The derivation shows that all solutions must be of the form above. Notice that if
the constant of integration for the integrating factor is chosen to be different
from 0, then the constant cancels itself from the negative exponent outside the
integral and the positive exponent inside. This proves that the answers to both
of the key questions are affirmative for first order linear differential equations.
For example: to determine where the differential equation
(cos x)y' + (sin x)y = x^2, y(0) = 4
has a unique solution, we can say:
Dividing by cos x to get it into standard form gives
y' + (tan x)y = x^2 sec x
Since tan x is continuous for
-π/2 < x < π/2
and this interval contains 0, the differential equation is guaranteed to have a
unique solution on this interval.
Now consider the differential equation:
dy/dx = x/y, y(5) = -3
Notice that the theorem does not apply, since the differential equation is
nonlinear. We can separate and solve:
y dy = x dx
y^2 = x^2 + C
Now plugging in the initial value, we get
9 = 25 + C
C = -16
y^2 = x^2 - 16
Taking the square root of both sides, we would get a plus or minus solution;
however, the initial condition states that the value must be negative, so the
solution is
y = -(x^2 - 16)^(1/2)
This equation is only valid for |x| > 4.
Notice that the original equation is not continuous at y = 0, but the interval
where the solution is valid could not have been guessed without solving the
differential equation.
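The solution just found can be checked numerically. This sketch verifies the initial condition and compares a centered finite difference of y(x) = -(x^2 - 16)^(1/2) against the right-hand side x/y:

```python
import math

def y(x):
    # explicit solution from separation of variables (valid for |x| > 4)
    return -math.sqrt(x * x - 16)

# check the initial condition y(5) = -3
print(y(5.0))  # -3.0

# check dy/dx = x/y at a sample point via a centered difference
x0, h = 5.0, 1e-6
dydx = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(abs(dydx - x0 / y(x0)) < 1e-5)  # True
```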
Note: This method is useful when 2 variables are on one side of the equation,
and the constant is on the other side.
1. Stack the equations to add.
2. Multiply the top equation by -3.
-3(x + y = 180)
3. Why multiply by -3? Add to see.
-3x + -3y = -540
+ 3x + 2y = 414
0 + -1y = -126
Notice that x is eliminated.
4. Solve for y:
y = 126
5. Plug in y = 126 to find x.
x + y = 180
x + 126 = 180
x = 54
6. Verify that (54, 126) is the correct answer.
3x + 2y = 414
3(54) + 2(126) = 414
414 = 414
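The elimination steps above can be written as a small generic routine; the function name and coefficient layout are mine:

```python
def solve_by_elimination(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by eliminating x:
    scale the first equation by -a2/a1 and add it to the second."""
    m = -a2 / a1
    # after adding, the x term vanishes: (b2 + m*b1) * y = c2 + m*c1
    yy = (c2 + m * c1) / (b2 + m * b1)
    xx = (c1 - b1 * yy) / a1
    return xx, yy

# the system from the worked example: x + y = 180, 3x + 2y = 414
print(solve_by_elimination(1, 1, 180, 3, 2, 414))  # (54.0, 126.0)
```

With a1 = 1 and a2 = 3, the multiplier m is -3, exactly the -3 used in step 2 above.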
Solving the linear equations by Addition:
The addition method of solving systems of equations is also called the
method of elimination. This method is similar to the method you probably
learned for solving simple equations.
If you had the equation "x + 6 = 11", you would write "-6" under either side
of the equation, and then you'd "add down" to get "x = 5" as the solution.
x + 6 = 11
    -6   -6
x      =  5
You'll do something similar with the addition method.
Solve the following system using addition.
2x + y = 9
3x - y = 16
Note that, if I add down, the y's will cancel out. So I'll draw an "equals"
bar under the system, and add down:
2x + y = 9
3x - y = 16
5x     = 25
Now I can divide through to solve for x = 5, and then back-solve, using
either of the original equations, to find the value of y. The first equation
has smaller numbers, so I'll back-solve in that one:
2(5) + y = 9
10 + y = 9
y = -1
Then the solution is (x, y) = (5, -1).
It doesn't matter which equation you use for the backsolving; you'll get
the same answer either way. If I'd used the second equation, I'd have gotten:
3(5) - y = 16
15 - y = 16
-y = 1
y = -1
...which is the same result as before.
Solve the following system using addition.
x - 2y = 9
x + 3y = 16
Note that the x-terms would cancel out if only they'd had opposite
signs. I can create this cancellation by multiplying either one of the
equations by -1, and then adding down as usual. It doesn't matter which
equation I choose, as long as I am careful to multiply the -1 through the
entire equation. (That means both sides of the "equals" sign!)
I'll multiply the second equation.
y + 12 = 3
y = -9
4. Verify that (-1, -9) is the correct solution.
(-9) - 5(-1) = -4
-9 + 5 = -4
In mathematics, a system of linear equations (or linear system) is a
collection of linear equations involving the same set of variables.[1] For
example,
since it makes all three equations valid. The word "system" indicates that the
equations are to be considered collectively, rather than individually.
In mathematics, the theory of linear systems is the basis and a fundamental
part of linear algebra, a subject which is used in most parts of modern
mathematics. Computational algorithms for finding the solutions are an
important part of numerical linear algebra, and play a prominent role
in engineering, physics, chemistry, computer
science,
and economics.
A system of non-linear equations can often be approximated by a linear
system (see linearization), a helpful technique when making a mathematical
model or computer simulation of a relatively complex system.
Very often, the coefficients of the equations are real or complex numbers and
the solutions are sought in the same set of numbers, but the theory and the
algorithms apply for coefficients and solutions in any field. For solutions in
an integral domain like the ring of the integers, or in other algebraic
structures, other theories have been developed; see Linear equation over a
ring. Integer linear programming is a collection of methods for finding the
"best" integer solution (when there are many). Gröbner basis theory provides
algorithms when coefficients and unknowns are polynomials. Also tropical
geometry is an example of linear algebra in a more exotic structure.
Elementary example.
The simplest kind of linear system involves two equations and two variables:
One method for solving such a system is as follows. First, solve the top
equation for x in terms of y. Now substitute this expression for x into the
bottom equation, which gives a single equation in y alone; solving it and
back-substituting then yields the values of x and y that satisfy the system.
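A minimal sketch of this substitution method, using exact fractions. Since the original example's coefficients did not survive extraction, the system 2x + y = 9, 3x - y = 16 from the earlier elimination example is reused here:

```python
from fractions import Fraction as F

def solve_by_substitution(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by solving the top
    equation for x and substituting into the bottom one."""
    # x = (c1 - b1*y) / a1; substituting gives a single equation in y
    y = (c2 * a1 - a2 * c1) / (b2 * a1 - a2 * b1)
    x = (c1 - b1 * y) / a1
    return x, y

print(solve_by_substitution(F(2), F(1), F(9), F(3), F(-1), F(16)))
```

Using Fraction avoids floating-point rounding, so the answer (5, -1) is exact.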
Vector equation:
One extremely helpful view is that each unknown is a weight for
a column vector in a linear combination.
This allows all the language and theory of vector spaces (or more
generally, modules) to be brought to bear. For example, the collection of all
possible linear combinations of the vectors on the left-hand side is called
their span, and the equations have a solution just when the right-hand vector
is within that span. If every vector within that span has exactly one
expression as a linear combination of the given left-hand vectors, then any
solution is unique. In any event, the span has a basis of linearly
independent vectors that do guarantee exactly one expression; and the
number of vectors in that basis (its dimension) cannot be larger than m or n,
but it can be smaller. This is important because if we have m independent
vectors, a solution is guaranteed regardless of the right-hand side, and
otherwise it is not guaranteed.
Matrix equation:
The vector equation is equivalent to a matrix equation of the form
Ax = b
where A is an m × n matrix, x is a column vector with n entries, and b is a
column vector with m entries.
Geometric interpretation:
For a system involving two variables (x and y), each linear equation
determines a line on the xy-plane. Because a solution to a linear system must
satisfy all of the equations, the solution set is the intersection of these lines,
and is hence either a line, a single point, or the empty set.
For three variables, each linear equation determines a plane in
three-dimensional space, and the solution set is the intersection of these planes.
Thus the solution set may be a plane, a line, a single point, or the empty set.
For example, as three parallel planes do not have a common point, the
solution set of their equations is empty; the solution set of the equations of
three planes intersecting at a point is a single point; if three planes pass through
two points, their equations have at least two common solutions; in fact the
solution set is infinite and consists of the whole line passing through these points.
For n variables, each linear equation determines a hyperplane in
n-dimensional space. The solution set is the intersection of these hyperplanes,
which may be a flat of any dimension.
General behavior:
In general, the behavior of a linear system is determined by the
relationship between the number of equations and the number of unknowns:
Usually, a system with fewer equations than unknowns has infinitely
many solutions, but it may have no solution. Such a system is known as
an underdetermined system.
Usually, a system with the same number of equations and unknowns
has a single unique solution.
Usually, a system with more equations than unknowns has no solution.
Such a system is also known as an overdetermined system.
In the first case, the dimension of the solution set is usually equal
to n - m, where n is the number of variables and m is the number of
equations.
The following pictures illustrate this trichotomy in the case of two variables:
(Figures not reproduced: one equation, two equations, three equations.)
The first system has infinitely many solutions, namely all of the points on
the blue line. The second system has a single unique solution, namely the
intersection of the two lines. The third system has no solutions, since the
three lines share no common point.
Keep in mind that the pictures above show only the most common case. It
is possible for a system of two equations and two unknowns to have no
solution (if the two lines are parallel), or for a system of three equations and
two unknowns to be solvable (if the three lines intersect at a single point). In
general, a system of linear equations may behave differently than expected if
the equations are linearly dependent, or if two or more of the equations
are inconsistent.
Properties:
Independence:
The equations of a linear system are independent if none of the
equations can be derived algebraically from the others. When the equations
are independent, each equation contains new information about the variables,
and removing any of the equations increases the size of the solution set. For
linear equations, logical independence is the same as linear independence.
For example, two equations that are scalar multiples of each other are not
independent: they are the same equation when scaled by a factor of two, and
they would produce identical graphs. This is an example of equivalence in a
system of linear equations.
For a more complicated example, the equations
are not independent, because the third equation is the sum of the other two.
Indeed, any one of these equations can be derived from the other two, and
any one of the equations can be removed without affecting the solution set.
The graphs of these equations are three lines that intersect at a single point.
Consistency:
Equivalence:
Two linear systems using the same set of variables are equivalent if
each of the equations in the second system can be derived algebraically from
the equations in the first system, and vice-versa. Two systems are equivalent
if either both are inconsistent or each equation of any of them is a linear
combination of the equations of the other one. It follows that two linear
systems are equivalent if and only if they have the same solution set.
There are several algorithms for solving a system of linear equations.
Describing the solution:
When the solution set is finite, it is reduced to a single element. In this
case, the unique solution is described by a sequence of equations whose left
hand sides are the names of the unknowns and right hand sides are the
corresponding values. When an order on the unknowns has been fixed, for
example the alphabetical order, the solution may be described as a vector of
values.
It can be difficult to describe a set with infinite solutions. Typically,
some of the variables are designated as free (or independent, or
as parameters), meaning that they are allowed to take any value, while the
remaining variables are dependent on the values of the free variables.
For example, consider the following system:
The solution set to this system can be described by the following equations:
Here z is the free variable, while x and y are dependent on z. Any point
in the solution set can be obtained by first choosing a value for z, and then
computing the corresponding values for x and y.
Each free variable gives the solution space one degree of freedom, the
number of which is equal to the dimension of the solution set. For example,
the solution set for the above equation is a line, since a point in the solution
set can be chosen by specifying the value of the parameter z. An infinite
solution of higher order may describe a plane, or higher-dimensional set.
Solving the first equation for x gives x = 5 + 2z - 3y, and plugging this into
the second and third equation yields
Solving the first of these equations for y yields y = 2 + 3z, and plugging this
into the second equation yields z = 2. We now have:
Row reduction:
Main article: Gaussian elimination
In row reduction, the linear system is represented as an augmented matrix:
The last matrix is in reduced row echelon form, and represents the
system x = -15, y = 8, z = 2. A comparison with the example in the previous
section on the algebraic elimination of variables shows that these two
methods are in fact the same; the difference lies in how the computations are
written down.
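Row reduction is easy to mechanize. The three equations of the example did not survive extraction, so this sketch uses an illustrative system whose unique solution is x = -15, y = 8, z = 2; the routine itself is a minimal Gaussian elimination with partial pivoting:

```python
def gaussian_eliminate(aug):
    """Row-reduce an augmented matrix [A | b] and back-substitute.
    A minimal sketch for a system with a unique solution."""
    n = len(aug)
    a = [row[:] for row in aug]  # work on a copy
    for col in range(n):
        # partial pivoting: pick the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [a[r][k] - factor * a[col][k] for k in range(n + 1)]
    # back-substitution from the last row upward
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (a[r][n] - s) / a[r][r]
    return x

# illustrative system (assumed):
#   x + 3y - 2z = 5,  3x + 5y + 6z = 7,  2x + 4y + 3z = 8
print(gaussian_eliminate([[1, 3, -2, 5], [3, 5, 6, 7], [2, 4, 3, 8]]))
```

Partial pivoting is not needed for exact arithmetic, but it keeps the floating-point computation stable.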
Cramer's rule:
Cramer's rule is an explicit formula for the solution of a system of linear
equations, with each variable given by a quotient of two determinants.
However, this formula is computationally inefficient for large
matrices. Special methods exist also for matrices with many zero elements
(so-called sparse matrices), which appear often in applications.
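A 2x2 instance of Cramer's rule, reusing the earlier system x + y = 180, 3x + 2y = 414 (the helper names are mine):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Cramer's rule for a1*x + b1*y = c1, a2*x + b2*y = c2:
    each unknown is a quotient of two determinants."""
    d = det2(a1, b1, a2, b2)
    if d == 0:
        raise ValueError("no unique solution")
    x = det2(c1, b1, c2, b2) / d  # replace the x-column with the constants
    y = det2(a1, c1, a2, c2) / d  # replace the y-column with the constants
    return x, y

print(cramer_2x2(1, 1, 180, 3, 2, 414))  # (54.0, 126.0)
```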
A completely different approach is often taken for very large systems,
which would otherwise take too much time or memory. The idea is to start
with an initial approximation to the solution (which does not have to be
accurate at all), and to change this approximation in several steps to bring it
closer to the true solution. Once the approximation is sufficiently accurate,
this is taken to be the solution to the system. This leads to the class
of iterative methods.
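The iterative idea described above can be sketched with Jacobi's method; the sample system and names here are mine:

```python
def jacobi(A, b, iterations=100):
    """Jacobi iteration: start from a rough guess and repeatedly solve each
    equation for its own unknown using the previous iterate's values.
    Converges, e.g., for strictly diagonally dominant matrices."""
    n = len(b)
    x = [0.0] * n  # the initial approximation need not be accurate at all
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# a strictly diagonally dominant system: 4x + y = 6, x + 3y = 7
# (its exact solution is x = 1, y = 2)
print(jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]))
```

Each sweep shrinks the error by a fixed factor, so the cost is a few cheap passes rather than a full elimination, which is what makes such methods attractive for very large sparse systems.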