By
Dr Rizwan Butt
CHAPTER ONE
Number Systems and Errors
Introduction
This chapter provides a brief introduction to numerical analysis.
1. Human Error
It arises when we use inaccurate measurements of data or
inaccurate representations of mathematical constants.
2. Truncation Error
It arises when we are forced to use mathematical techniques
that give approximate, rather than exact, answers.
3. Round-off Error
This type of error is associated with the limited number of digits
used to represent numbers in computers.
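A minimal Python sketch (illustrative, not from the text) contrasting the two computational error types:

```python
import math

# Round-off error: 0.1 has no exact binary floating-point representation,
# so accumulating it ten times does not give exactly 1.0.
s = sum(0.1 for _ in range(10))
print(s == 1.0)              # False
print(abs(s - 1.0))          # about 1e-16

# Truncation error: approximating e = e^1 by a truncated Taylor series.
approx = sum(1.0 / math.factorial(k) for k in range(5))  # terms up to x^4
print(abs(math.e - approx))  # about 9.9e-3, the truncation error
```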
CHAPTER TWO
Solution of Nonlinear Equations
Introduction
Here we discuss different types of nonlinear equations
f(x) = 0 and how to find approximations of their real roots.
1. Method of Bisection
This is a simple method with slow convergence (but convergence is
guaranteed), based on the Intermediate Value Theorem. Its
strategy is to bisect the interval and then retain the half
whose endpoints still bracket the root.
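A minimal sketch of the method, assuming f is continuous on [a, b] and changes sign there:

```python
def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        m = 0.5 * (a + b)        # bisect the interval
        fm = f(m)
        if fm == 0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:          # keep the half that still brackets the root
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Example: root of x^3 - x - 2 on [1, 2] (root near 1.5214)
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```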
3. Fixed-Point Method
This is a very general method for finding the root of a nonlinear
equation, and it provides a theoretical framework within
which the convergence properties of subsequent methods can be
evaluated. The basic idea of the method is to convert the equation
f(x) = 0 into an equivalent form x = g(x).
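A minimal sketch of the iteration x_{n+1} = g(x_n), with an illustrative choice of g:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = g(x_n); converges if |g'(x)| < 1 near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Example: f(x) = x^2 - 2 rewritten as x = g(x) = (x + 2/x) / 2
print(fixed_point(lambda x: 0.5 * (x + 2.0 / x), 1.0))  # ~sqrt(2)
```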
4. Newton's Method
This is a fast-converging method (but convergence is not
guaranteed). It is also known as the method of tangents because,
after the root is estimated, the zero of the tangent to the
function at that estimate is determined.
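A minimal sketch, assuming the derivative f′ is supplied analytically:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)   # step to the zero of the tangent line at x
    raise RuntimeError("Newton's method did not converge")

# Example: square root of 2 as the root of f(x) = x^2 - 2
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))
```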
5. Secant Method
This is a fast-converging method (though not as fast as Newton's
method) and is recommended as the best general-purpose
method. It is very similar to the false position method,
but the interval need not contain a root, and no account
is taken of the signs of the numbers f(x_n).
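A minimal sketch; the two starting values need not bracket a root:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: replace f'(x_n) in Newton's method by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # secant through the last two iterates
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    raise RuntimeError("secant method did not converge")

print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```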
Acceleration of Convergence
Here we discuss a method which can be applied to any linearly
convergent iterative method to accelerate its convergence.
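The classic scheme of this kind is Aitken's Δ² process; the text does not name the method here, so the following sketch is an assumption:

```python
import math

def aitken_delta2(x0, x1, x2):
    """Aitken's Delta^2 extrapolation of three successive iterates of a
    linearly convergent sequence; the result is closer to the limit."""
    denom = x2 - 2 * x1 + x0
    return x0 - (x1 - x0) ** 2 / denom

# Example: accelerate the linearly convergent iteration x_{n+1} = cos(x_n)
x0 = 1.0
x1 = math.cos(x0)
x2 = math.cos(x1)
print(aitken_delta2(x0, x1, x2))  # closer to the fixed point 0.73908... than x2
```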
Systems of Nonlinear Equations
Here we are given more than one nonlinear equation at a time.
Solving systems of nonlinear equations is a difficult task.
Newton's Method
We discuss this method for a system of two nonlinear equations
in two variables. It can be applied to systems of nonlinear
equations whose partial derivatives are available analytically;
otherwise it cannot be used.
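A minimal sketch for a 2x2 system, with the Jacobian supplied analytically (the example system is illustrative):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for a system F(x) = 0 with analytical Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx, np.inf) < tol:
            return x
        x = x - np.linalg.solve(J(x), Fx)  # solve J(x) dx = F(x), then step
    raise RuntimeError("Newton's method did not converge")

# Example: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0]*v[1] - 1])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))
```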
Roots of Polynomials
A very common nonlinear problem, finding the roots of a
polynomial, is discussed here.
1. Horner’s Method
It is one of the most efficient ways to evaluate
polynomials and their derivatives at a given point. It
is helpful for finding an initial approximation for a
solution by Newton's method. It is also quite stable.
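A minimal sketch evaluating p(x) and p′(x) in a single pass:

```python
def horner(coeffs, x):
    """Evaluate p(x) and p'(x) by Horner's method.
    coeffs = [a_n, ..., a_1, a_0] in descending powers."""
    p, dp = 0.0, 0.0
    for a in coeffs:
        dp = dp * x + p     # derivative accumulates via the product rule
        p = p * x + a       # nested multiplication for p(x)
    return p, dp

# Example: p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([2, -6, 2, -1], 3.0))  # (5.0, 20.0)
```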
2. Muller’s Method
It is a generalization of the secant method and uses quadratic
interpolation among three points. It is a fast-converging method
for finding an approximation to a simple zero of a polynomial
equation.
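A minimal sketch following the usual three-point formulation; complex arithmetic is used so that complex zeros can also be reached (real inputs may print with a zero imaginary part):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Muller's method: fit a quadratic through three points and take the
    root of that quadratic closer to x2 as the next approximation."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)
        b = a * h2 + d2
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # pick the sign that maximizes |denominator| for numerical stability
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    raise RuntimeError("Muller's method did not converge")

# Example: p(x) = x^3 + x + 1 (real zero near -0.6823)
print(muller(lambda x: x**3 + x + 1, -1.0, 0.0, 1.0))
```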
3. Bairstow’s Method
It can be used to find all the zeros of a polynomial. It is one of the
most efficient methods for determining the real and complex roots of
polynomials with real coefficients.
CHAPTER THREE
Systems of Linear Equations
Introduction
We give a brief introduction to linear equations, linear systems,
and their importance.
1. Cramer's Rule
This method is used for solving linear systems by means of
determinants. It is one of the least efficient methods
for solving a large number of linear equations, but it is
useful for explaining some problems inherent in the
solution of linear equations.
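A minimal sketch for a small system (numpy's determinant routine stands in for hand-computed determinants):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A), where A_i is
    A with column i replaced by b. Fine for tiny systems, inefficient for large ones."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                   # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

print(cramer([[2, 1], [1, 3]], [5, 10]))  # [1., 3.]
```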
3. Gauss-Jordan Method
It is a modification of the Gauss elimination method; although
inefficient for practical calculation, it is often useful for
theoretical purposes. The basic idea of the method is to convert
the original system into diagonal form.
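A minimal sketch reducing the augmented matrix to diagonal form (partial pivoting added for stability, an implementation choice not specified in the text):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to diagonal (reduced row echelon)
    form; the solution is then read off directly."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = len(M)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                      # scale the pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]       # eliminate column k everywhere else
    return M[:, -1]

print(gauss_jordan([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
# [ 2.  3. -1.]
```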
4. LU Decomposition Method
It is also a modification of the Gauss elimination method. Here we
decompose, or factorize, the coefficient matrix into the product of
two triangular matrices (lower and upper).
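A minimal sketch of the Doolittle variant (unit diagonal in L; pivoting omitted, an assumption):

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L U with a unit
    diagonal in L. Assumes no zero pivots are encountered."""
    A = np.asarray(A, float)
    n = len(A)
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[4., 3.], [6., 3.]])
L, U = lu_doolittle(A)
print(L, U, sep="\n")       # L @ U reproduces A
```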
Convergence Criteria
We discuss sufficient conditions for the convergence of the
Jacobi and Gauss-Seidel methods, obtained by showing that the
l_∞-norm of the corresponding iteration matrix is less than one.
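A minimal sketch of the check for the Jacobi method (the Gauss-Seidel case is analogous):

```python
import numpy as np

def jacobi_converges(A):
    """Sufficient condition for Jacobi convergence: the infinity norm of the
    iteration matrix T = -D^{-1}(L + U) is less than one."""
    A = np.asarray(A, float)
    D = np.diag(np.diag(A))
    T = -np.linalg.inv(D) @ (A - D)          # Jacobi iteration matrix
    return np.linalg.norm(T, np.inf) < 1

print(jacobi_converges([[4., 1., 1.], [1., 5., 2.], [1., 2., 6.]]))  # True
```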
Iterative Refinement
We discuss the residual corrector method, which can be used to
improve an approximate solution obtained by any means.
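A minimal sketch of one refinement sweep repeated a few times:

```python
import numpy as np

def iterative_refinement(A, b, x, steps=3):
    """Improve an approximate solution x of A x = b: compute the residual
    r = b - A x, solve A d = r for the correction d, and update x."""
    A, b, x = np.asarray(A, float), np.asarray(b, float), np.asarray(x, float)
    for _ in range(steps):
        r = b - A @ x                 # residual of the current approximation
        d = np.linalg.solve(A, r)     # correction (in practice, reuse LU factors)
        x = x + d
    return x

A = np.array([[4., 3.], [6., 3.]])
b = np.array([10., 12.])
print(iterative_refinement(A, b, x=np.array([0.9, 2.1])))  # -> [1., 2.]
```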
CHAPTER FOUR
Approximating Functions
Introduction
We describe several numerical methods for
approximating functions other than elementary functions.
The main purpose of these numerical methods is to
replace a complicated function by one which is simpler
and more manageable.
3. Aitken's Method
It is an iterative interpolation method which is based on
the repeated application of a simple interpolation
method.
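A minimal sketch of repeated linear interpolation in the Aitken-Neville style (the exact variant used in the text is an assumption):

```python
def iterated_interp(xs, ys, t):
    """Aitken-Neville iterated interpolation: repeatedly combine pairs of
    lower-order interpolants until one value, the full interpolant at t, remains."""
    p = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            # linear interpolation between two overlapping interpolants
            p[i] = ((t - xs[i-k]) * p[i] - (t - xs[i]) * p[i-1]) / (xs[i] - xs[i-k])
    return p[-1]

# Example: interpolate f(x) = x^2 from three samples; exact for quadratics
print(iterated_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```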
1. Linear Spline
This is one of the simplest piecewise polynomial interpolations for
approximating functions; the basic idea is simply to connect
consecutive points with straight lines.
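A minimal sketch; numpy's interp evaluates exactly this piecewise-linear interpolant:

```python
import numpy as np

# A linear spline just connects consecutive data points with straight lines.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.sin(xs)
print(np.interp(1.5, xs, ys))   # linear interpolation between (1, sin 1) and (2, sin 2)
```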
2. Cubic Spline
The most widely used spline approximations are cubic: cubic pieces
are patched together across the ordered data so as to maintain
continuity and smoothness, and they are more powerful than
polynomial interpolation.
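A minimal sketch using scipy's cubic spline (the default not-a-knot boundary conditions are an implementation detail not specified here):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A cubic spline: piecewise cubics with continuous first and second derivatives.
xs = np.linspace(0, 2 * np.pi, 8)
cs = CubicSpline(xs, np.sin(xs))
print(cs(1.5), np.sin(1.5))     # spline value vs. the true function
print(cs(1.5, 1), np.cos(1.5))  # spline derivative vs. the true derivative
```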
CHAPTER FIVE
Numerical Differentiation and Integration
Numerical Differentiation
A polynomial p(x) is differentiated to obtain p′(x), which is taken
as an approximation to f′(x) at any numerical value of x.
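For example, differentiating the quadratic through x−h, x, and x+h yields the familiar central-difference formula; a minimal sketch:

```python
import math

def central_diff(f, x, h=1e-5):
    """Central difference f'(x) ~ (f(x+h) - f(x-h)) / (2h), obtained by
    differentiating the interpolating quadratic through three nearby points."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_diff(math.sin, 1.0), math.cos(1.0))  # approximation vs. exact
```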
Numerical Integration
Here, we pass a polynomial through points of a function and
then integrate this polynomial approximation to the function. For
approximating the integral of f(x) between a and b we use the
Newton-Cotes techniques.
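A minimal sketch of one Newton-Cotes formula, the composite Simpson's rule:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule (a Newton-Cotes formula); n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

print(composite_simpson(math.sin, 0, math.pi, 10))  # ~2.0, the exact integral
```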
4. Romberg Integration
Romberg integration is based on the repeated
trapezoidal rule: using the results of the repeated
trapezoidal rule with two different data spacings, a more
accurate integral is evaluated.
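A minimal sketch combining halved-spacing trapezoidal estimates by extrapolation:

```python
import math

def romberg(f, a, b, levels=5):
    """Romberg integration: trapezoidal estimates with halved spacings are
    combined by Richardson extrapolation into more accurate values."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2
        # trapezoidal rule with halved spacing, reusing earlier function values
        R[i][0] = 0.5 * R[i-1][0] + h * sum(
            f(a + (2*k - 1) * h) for k in range(1, 2**(i-1) + 1))
        for j in range(1, i + 1):
            R[i][j] = R[i][j-1] + (R[i][j-1] - R[i-1][j-1]) / (4**j - 1)
    return R[levels-1][levels-1]

print(romberg(math.sin, 0, math.pi))  # ~2.0
```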
5. Gaussian Quadratures
The Gauss(-Legendre) quadratures are based on
integrating a polynomial fitted to the data points at the
roots of a Legendre polynomial. The order of
accuracy of a Gauss quadrature is approximately twice
as high as that of the Newton-Cotes closed formula
using the same number of data points.
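A minimal sketch using numpy's Gauss-Legendre nodes and weights on the standard interval [-1, 1]:

```python
import numpy as np

# n-point Gauss-Legendre quadrature integrates polynomials of degree 2n-1
# exactly; the nodes are the roots of the Legendre polynomial P_n.
nodes, weights = np.polynomial.legendre.leggauss(3)

# Integrate f(x) = x^4 on [-1, 1]; exact value is 2/5, and three points suffice.
print(np.sum(weights * nodes**4), 2 / 5)
```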
CHAPTER SIX
Ordinary Differential Equations
Introduction
We discuss many numerical methods for solving first-order
ordinary differential equations and systems of first-order ordinary
differential equations.
Numerical Methods for Solving IVP
Here we discuss many single-step and multi-step numerical
methods for solving the initial-value problem (IVP),
and some numerical methods for solving the boundary-value
problem (BVP).
1. Single-Step Methods for IVP
These methods are self-starting: they estimate y′(x)
starting from the initial condition and proceed step-wise. All the
information used by these methods is consequently obtained
within the interval over which the solution is being approximated.
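A minimal sketch of one classical single-step method, the fourth-order Runge-Kutta method (chosen here for illustration):

```python
def rk4(f, x0, y0, h, steps):
    """Classical 4th-order Runge-Kutta: a single-step method that advances
    the IVP y' = f(x, y), y(x0) = y0 using only data from the current step."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

# Example: y' = y, y(0) = 1; y(1) should be e = 2.71828...
print(rk4(lambda x, y: y, 0.0, 1.0, h=0.1, steps=10))
```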
Boundary-Value Problems
Here, we solve ordinary differential equations with known
conditions at more than one value of the independent
variable.
CHAPTER SEVEN
Eigenvalues and Eigenvectors
1. Power Method
It can be used to compute the eigenvalue of largest modulus
(the dominant eigenvalue) and the corresponding eigenvector of a
general matrix.
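A minimal sketch, assuming the dominant eigenvalue is simple and strictly largest in modulus:

```python
import numpy as np

def power_method(A, iters=100):
    """Power method: repeated multiplication by A makes the component along
    the dominant eigenvector grow fastest; the Rayleigh quotient then
    estimates the dominant eigenvalue."""
    A = np.asarray(A, float)
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)         # normalize to avoid overflow
    lam = x @ A @ x                    # Rayleigh quotient (x has unit length)
    return lam, x

lam, v = power_method([[2., 1.], [1., 3.]])
print(lam)  # ~3.618, the dominant eigenvalue
```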
Location of Eigenvalues
We deal here with the location of the eigenvalues of both
symmetric and non-symmetric matrices, that is, the location
of the zeros of the characteristic polynomial, by using the
Gerschgorin Circles Theorem and the Rayleigh Quotient
Theorem.
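A minimal sketch of the Gerschgorin disks (row version):

```python
import numpy as np

def gerschgorin_disks(A):
    """Every eigenvalue of A lies in some disk centered at a diagonal entry
    a_ii with radius equal to the sum of |a_ij| over the rest of row i."""
    A = np.asarray(A, float)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

print(gerschgorin_disks([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]]))
# [(4.0, 1.0), (3.0, 2.0), (2.0, 1.0)] -> all eigenvalues lie in these disks
```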
Intermediate Eigenvalues
Here we discuss the deflation method, used to obtain the other
eigenvalues of a matrix once the dominant eigenvalue is
known.
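One common variant for symmetric matrices is Hotelling's deflation; the text does not specify the variant, so the following sketch is an assumption:

```python
import numpy as np

def dominant_eig(A, iters=200):
    """Tiny power iteration returning the dominant eigenpair (see Power Method)."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x, x

# Hotelling's deflation for a symmetric matrix: subtracting lam1 * v1 v1^T
# removes the dominant eigenvalue, so power iteration can find the next one.
A = np.array([[2., 1.], [1., 3.]])
lam1, v1 = dominant_eig(A)
B = A - lam1 * np.outer(v1, v1)        # deflated matrix
lam2, _ = dominant_eig(B)
print(lam1, lam2)                      # ~3.618 and ~1.382
```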
Eigenvalues of Symmetric Matrices
Here, we develop some methods to find all eigenvalues of a
symmetric matrix by using a sequence of similarity
transformations that transform the original matrix into a
diagonal or tridiagonal matrix.
1. Jacobi Method
It can be used to find all eigenvalues and the corresponding
eigenvectors of a symmetric matrix, and it permits the
transformation of a matrix into diagonal form.
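A minimal sketch that zeroes the largest off-diagonal entry at each step (a standard strategy; the text does not fix the order of rotations):

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_sweeps=100):
    """Jacobi eigenvalue method: plane rotations repeatedly annihilate the
    largest off-diagonal entry; A converges to a diagonal matrix of
    eigenvalues, and the accumulated rotations give the eigenvectors."""
    A = np.asarray(A, float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # rotation angle that annihilates A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = -s, s
        A = J.T @ A @ J                # similarity transformation
        V = V @ J
    return np.diag(A), V

vals, vecs = jacobi_eigen([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
print(np.sort(vals))
```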
2. Sturm Sequence Iteration
It can be used in the calculation of eigenvalues of any symmetric
tridiagonal matrix.
3. Givens' Method
It can be used to find all eigenvalues of a symmetric matrix
(the corresponding eigenvectors can be obtained by using the shifted
inverse power method), and it permits the transformation of a matrix
into tridiagonal form.
4. Householder’s Method
This method is a variation of Givens' method and enables us to
reduce a symmetric matrix to symmetric tridiagonal form.
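A minimal sketch; scipy's Hessenberg reduction uses Householder reflections, and for a symmetric matrix the result is tridiagonal:

```python
import numpy as np
from scipy.linalg import hessenberg

# For a symmetric matrix, Householder reduction to Hessenberg form yields a
# symmetric tridiagonal matrix with the same eigenvalues.
A = np.array([[4., 1., -2., 2.],
              [1., 2., 0., 1.],
              [-2., 0., 3., -2.],
              [2., 1., -2., -1.]])
T = hessenberg(A)
print(np.round(T, 3))   # tridiagonal; eigenvalues match those of A
```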