
Lecture 1: PRELIMINARIES

1. What is Computational Linear Algebra?


Computational/Numerical Linear Algebra consists of TWO major components:
Computational/numerical & Linear Algebra
In short, Computational/Numerical Linear Algebra involves solving linear algebra
problems using a computer or numerical computations.
Numerical computation uses strictly numbers.Computational => numbers => numerical
Example 1.1:
Algebraic computation:
1/3 + 1/3 = 2/3.
Numerical computation (4-digit arithmetic):
0.3333 + 0.3333 = 0.6666.
Observations from the example: while algebraic computation is always exact, it is common to make approximations when performing numerical calculations.
When approximations are made, errors are introduced, and this can cause many undesirable consequences such as
i) inaccuracy of the solution;
ii) instability of numerical procedures.
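This rounding behavior can be reproduced in MATLAB (the tool used in Example 2.2 below); a minimal sketch, assuming round(x, n, 'significant') is available (MATLAB R2014b or later):

```matlab
% Simulate 4-digit decimal arithmetic by rounding operands and results
fl4 = @(x) round(x, 4, 'significant');
a = fl4(1/3)                   % 0.3333
s = fl4(a + a)                 % 0.6666, while 2/3 rounds to 0.6667
relerr = abs(s - 2/3)/(2/3)    % about 1e-4: the error introduced by rounding
```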
Another important aspect of numerical computation is timing: the time it takes to perform a particular numerical procedure. Basically, each of the basic arithmetic operations +, -, *, / has a specific processing time. The computational time (cost) of a numerical procedure therefore depends on how many of these basic operations it needs to carry out (efficiency/complexity).
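As a rough illustration (a sketch; the actual timings depend on the machine), MATLAB's tic/toc can be used to observe how cost grows with problem size:

```matlab
% Time matrix-matrix multiplication, an O(n^3) procedure, at two sizes
for n = [500 1000]
    A = randn(n); B = randn(n);
    tic; C = A*B; t = toc;
    fprintf('n = %4d: %.3f seconds\n', n, t);
end
```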
Computational solutions of mathematical problems are obtained via numerical procedures called algorithms. The important properties that we will study consist of the following:
i) efficiency (complexity) and stability of an algorithm;
ii) conditioning of a problem and how it affects the algorithm.
These properties will help us understand how to balance accuracy, efficiency and stability for a particular algorithm.

1.1 Fundamental Linear Algebra Problems


A. THE LINEAR SYSTEM PROBLEM:
Given an n × n nonsingular matrix A and an n-vector b, the problem is to find an n-vector x such that
Ax = b.
B. THE LEAST SQUARES PROBLEM:
Given an m × n matrix A and an m-vector b, the least squares problem is to find an n-vector x such that the 2-norm of the residual vector, ||Ax − b||_2, is as small as possible.

C. THE EIGENVALUE PROBLEM:
Given an n × n matrix A, the problem is to find n numbers λ_i and n-vectors x_i such that
Ax_i = λ_i x_i,   i = 1, 2, …, n.
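Each of these three problems has a standard built-in solver in MATLAB; a minimal sketch (the matrix and vector values here are arbitrary illustrations):

```matlab
A = [4 1; 2 3];  b = [1; 2];        % small illustrative data
x = A \ b;                          % A: linear system, via Gaussian elimination
C = [1 0; 1 1; 0 1];  d = [1; 2; 3];
xls = C \ d;                        % B: least squares, via QR when C is tall
[V, D] = eig(A);                    % C: eigenvalues on diag(D), eigenvectors in V
```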

1.2 Computational Difficulties in Solving Linear Algebra Problems Using Obvious Approaches

Solving a linear system of equations using Cramer's Rule: solving a 20 × 20 linear system this way (with the determinants evaluated by cofactor expansion) on a fast modern-day computer might take more than a million years.
Cramer's Rule:
Given a linear system of equations Ax = b, where the square matrix A is invertible and the vector x = [x1, x2, …, xn]^T is the column vector of the variables, the theorem says
x_i = det(A_i) / det(A),   i = 1, 2, …, n,
where A_i is the matrix formed by replacing the i-th column of A by the column vector b.
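For contrast, here is a direct transcription of the rule (a sketch, saved as cramer.m; MATLAB's det runs in O(n^3) via LU factorization, so this version avoids the exponential cofactor cost but is still roughly n times more expensive than x = A\b):

```matlab
function x = cramer(A, b)
% CRAMER  Solve Ax = b by Cramer's Rule (illustration only; prefer A\b).
    n = length(b);
    x = zeros(n, 1);
    dA = det(A);
    for i = 1:n
        Ai = A;
        Ai(:, i) = b;            % replace the i-th column of A by b
        x(i) = det(Ai) / dA;
    end
end
```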

Computing the unique solution of a linear system of equations using matrix inversion: the unique solution of a nonsingular system can be written explicitly as
x = A^(-1) b.
Computing the inverse is about three times as expensive as solving the linear system itself using a standard elimination procedure, and it often leads to more inaccuracies.
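A quick comparison (a sketch; the exact numbers will vary with the random data):

```matlab
n = 2000;
A = randn(n);  b = randn(n, 1);
tic; x1 = A \ b;       t1 = toc;   % elimination (LU), applied directly to b
tic; x2 = inv(A) * b;  t2 = toc;   % explicit inverse: slower, often less accurate
fprintf('backslash: %.3f s, inv: %.3f s\n', t1, t2);
fprintf('residuals: %.2e vs %.2e\n', norm(A*x1 - b), norm(A*x2 - b));
```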

Solving the least squares problem by normal equations: if the m × n matrix A has full rank and m ≥ n, then the least squares problem has a unique solution, and this solution is theoretically given by the solution of the linear system
A^T A x = A^T b.
This is called the normal equation. The procedure has severe numerical limitations. First, in finite-precision arithmetic, some vital information might be lost in the formation of A^T A. Second, the normal equation is more sensitive to perturbations than the linear system Ax = b, and this sensitivity may corrupt the solution.
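The loss of information in forming A^T A can be seen on the classic Läuchli example (a sketch, with δ chosen so small that δ² falls below the machine precision relative to 1):

```matlab
delta = 1e-8;                 % delta^2 = 1e-16 is below eps = 2.2e-16
A = [1 1; delta 0; 0 delta];  % rank(A) = 2
G = A' * A                    % computed as [1 1; 1 1]: 1 + delta^2 rounds to 1
rank(G)                       % returns 1, so the normal equations are singular
```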

Computing the eigenvalues of a matrix by solving its characteristic polynomial: not a computationally viable approach, because the roots of a polynomial are sensitive to perturbations in its coefficients.
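Wilkinson's classic example illustrates this sensitivity (a sketch using MATLAB's poly and roots):

```matlab
% The polynomial with roots 1, 2, ..., 20 has enormous coefficients; recovering
% the roots from those (slightly rounded) coefficients perturbs them badly.
p = poly(1:20);                       % coefficients of (x-1)(x-2)...(x-20)
r = roots(p);                         % some computed roots stray far from 1:20
max(abs(sort(real(r))' - (1:20)))     % orders of magnitude larger than eps
```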

NUMERICAL LINEAR ALGEBRA DEALS WITH IN-DEPTH ANALYSIS OF SUCH COMPUTATIONAL DIFFICULTIES, INVESTIGATIONS INTO HOW THESE DIFFICULTIES CAN BE OVERCOME IN CERTAIN INSTANCES, AND FORMULATIONS AND IMPLEMENTATIONS OF VIABLE NUMERICAL ALGORITHMS FOR SCIENTIFIC AND ENGINEERING USE.

2. Floating Point Operations


In solving a numerical problem on a computer, we do not usually expect to get the exact
answer. Some amount of error is inevitable.
Roundoff error may occur initially when the data are represented in the finite number
system of the computer. Further, roundoff error may occur whenever arithmetic
operations are used. These errors may grow to such an extent that the computed solution
may be completely unreliable.
To avoid this, one must understand how computational errors occur. To do this, one must
be familiar with the type of numbers used by the computer.
Definition 2.1: A floating-point number in base b is a number of the form
±( d1/b + d2/b^2 + ⋯ + dt/b^t ) × b^k,
where t, d1, d2, …, dt, b, k are all integers and
0 ≤ di ≤ b − 1,   i = 1, …, t.
The integer t refers to the number of digits and depends on the word length of the computer. The exponent k is restricted to lie within certain bounds, L ≤ k ≤ U, which also depend on the particular computer.

Example 2.1:
The following are five-digit decimal (base 10) floating-point numbers:
0.53216 × 10^4
0.81724 × 10^21
0.00112 × 10^8
0.11200 × 10^6

Note that the numbers 0.00112 × 10^8 and 0.11200 × 10^6 are equal (both represent 112,000). Thus the floating-point representation of a number need not be unique. Floating-point numbers that are written with no leading zeros (i.e., with d1 ≠ 0) are said to be normalized.
Example 2.2 (MATLAB examples):
i) Investigating the size of t
ii) Investigating the range for k
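A sketch of what such an investigation can look like, assuming IEEE double precision (MATLAB's default, with base b = 2 rather than 10):

```matlab
eps              % 2.2204e-16: spacing of doubles near 1, i.e. t = 53 binary digits
realmax          % about 1.7977e+308: largest finite double (upper bound on k)
realmin          % about 2.2251e-308: smallest normalized double (lower bound on k)
1 + eps/2 == 1   % true: an increment below eps is lost when added to 1
```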

Most real numbers have to be rounded off in order to be represented as t-digit floating-point numbers. The difference between the floating-point number x' and the original number x is called the roundoff error.
When we do further operations on the numbers (add, subtract, multiply, etc.), more roundoff error will occur.
For these reasons, we cannot expect to get the exact solution to the original problem.
For example, suppose we want to solve
Ax = b.
When the entries of A and b are read into the computer, roundoff errors will generally occur. Thus, the program will actually be attempting to compute the solution to the perturbed problem
(A + E) x = b.
Definition 2.2: Let x' denote an approximation to x. Then there are two ways of measuring the error:
Absolute Error = |x' − x|,
Relative Error = |x' − x| / |x|,   x ≠ 0.

The relative error makes more sense than the absolute error. The following example explains this.
Example 2.3:
x1 = 1.31,  x'1 = 1.30,  x2 = 0.12,  x'2 = 0.11.
The absolute errors in both cases are the same:
|x'1 − x1| = |x'2 − x2| = 0.01.
On the other hand, the relative errors are
|x'1 − x1| / |x1| = 0.01/1.31 ≈ 0.0076335,
|x'2 − x2| / |x2| = 0.01/0.12 ≈ 0.0833333.
Thus the relative error shows that x'1 is closer to x1 than x'2 is to x2, whereas the absolute error gives no indication of this at all. The example tells us that, although the errors affecting x1 and x2 are of the same size, the effect on x2 is more profound.
Example 2.4:
Consider the following matrices:
A = [1 2; 3 4],   B = [1 1; 0.9 1].
Suppose the entry in the (2,1) position of both matrices is affected by an error of 0.1. Then the approximations of the matrices are
A' = [1 2; 3.1 4],   B' = [1 1; 1 1].
Check the determinants of the matrices and of their corresponding approximations to discover the effects of the error on the properties of the matrices.
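A sketch of that check in MATLAB:

```matlab
A = [1 2; 3 4];    Ap = [1 2; 3.1 4];   % Ap, Bp stand for A' and B'
B = [1 1; 0.9 1];  Bp = [1 1; 1 1];
[det(A) det(Ap)]   % -2.0 and -2.2: a modest relative change, A' stays nonsingular
[det(B) det(Bp)]   % 0.1 and 0: the same-sized error makes B' exactly singular
```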

More examples of absolute and relative errors:

Real number x     Four-digit decimal          Absolute error,   Relative error,
                  floating-point number x'    |x' − x|          |x' − x| / |x|
62,133            0.6213 × 10^5               3                 4.8 × 10^-5
0.12658           0.1266 × 10^0               2 × 10^-5         1.6 × 10^-4
47.213            0.4721 × 10^2               3.0 × 10^-3       6.4 × 10^-5
π = 3.14159…      0.3142 × 10^1               4 × 10^-4         1.3 × 10^-4

When arithmetic operations are applied to floating-point numbers, additional roundoff errors may occur.
Example 2.5:
Let a = 0.263 × 10^4 and b = 0.466 × 10^1 be three-digit decimal floating-point numbers. If these numbers are added, the exact sum will be
a + b = 0.263466 × 10^4.
However, the floating-point representation of this sum is 0.263 × 10^4. This then is the computed sum; we denote the floating-point sum by fl(a + b).
The absolute error in the sum is
|fl(a + b) − (a + b)| = 4.66,
and the relative error is
4.66 / (0.263466 × 10^4) ≈ 0.18 × 10^-2.

Floating-point subtraction, multiplication and division can be done in a similar manner.
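Such t-digit decimal arithmetic can be simulated directly (a sketch, again assuming round(x, n, 'significant')):

```matlab
fl3 = @(x) round(x, 3, 'significant');   % round to 3 significant decimal digits
a = 0.263e4;  b = 0.466e1;
s = fl3(a + b)                           % 2630, i.e. 0.263 x 10^4 as in Example 2.5
abs_err = abs(s - (a + b))               % 4.66
rel_err = abs_err / abs(a + b)           % about 0.18 x 10^-2
```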

Example 2.6:
Floating-point operation for addition of matrices: each entry is rounded when it is read in, and each entry of the sum is rounded again. Writing the stored entries as a'ij = aij + δaij and b'ij = bij + δbij, where the δ terms are the input roundoff errors, we have
fl(A) = [a'11 a'12; a'21 a'22],   fl(B) = [b'11 b'12; b'21 b'22],
fl(A + B) = fl( fl(A) + fl(B) ) = [fl(a'11 + b'11)  fl(a'12 + b'12); fl(a'21 + b'21)  fl(a'22 + b'22)].
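The same rounding helper applies entrywise to matrices (a sketch continuing the previous block):

```matlab
fl3 = @(X) round(X, 3, 'significant');   % rounds each entry of a matrix
A = [0.263e4 1.2345; 6.789 0.466e1];
B = [1.111 2.222; 3.333 4.444];
S = fl3(fl3(A) + fl3(B))                 % implements fl(A + B) = fl(fl(A) + fl(B))
```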

EXERCISES:
1. Find the three-digit decimal floating-point representation of each of the following numbers.
a) 2312   b) 32.56   c) 0.01277   d) 82.431

2. Find the absolute error and the relative error when each of the real numbers in
Exercise 1 is approximated by a three-digit decimal floating-point number.
3. Do each of the following using four-digit decimal floating-point arithmetic and calculate the absolute and relative errors in your answers.
a) 10,420 + 0.0018   b) 10,424 − 10,416   c) 0.12347 − 0.12342   d) (3626.6) × (22.656)

4. Let x1 = 94,210, x2 = 8631, x3 = 1440, x4 = 133 and x5 = 34. Calculate each of the following using four-digit decimal floating-point arithmetic.
a) (((x1 + x2) + x3) + x4) + x5
b) x1 + ((x2 + x3) + (x4 + x5))
c) (((x5 + x4) + x3) + x2) + x1.
