
Solution

A = zeros(100,100);
A = diag(1:100) + diag(ones(99,1),1) + diag(ones(99,1),-1);
b = ones(100,1);

% Steepest Descent
residual_SD = zeros(100,1);
xk = zeros(100,1);
rk = b;
k = 0; tol = 1e-7;
while norm(rk)>tol && k<100
k = k + 1;
alpha_k = (rk'*rk)/(rk'*A*rk);
xk = xk + alpha_k*rk;
rk = b - A*xk;
residual_SD(k) = norm(rk);
end

% Conjugate Gradients method


residual_CG = zeros(100,1);
x = zeros(100,1);
r = b;
k = 0; rho_c = b'*b;
while norm(r)>eps*norm(b) && k<100
k = k + 1;
if k==1
s = r;
else
beta_c = rho_c/rho_m;
s = r + beta_c*s;
end
g = A*s;
alpha_c = rho_c/(s'*g);
x = x + alpha_c*s;
r = r - alpha_c*g;
rho_m = rho_c;
rho_c = r'*r;
residual_CG(k) = sqrt(rho_c);
end

figure
semilogy(residual_CG,'r','linewidth',2)
hold on
semilogy(residual_SD,'linewidth',2)
ylabel('Residual norm')
xlabel('Iteration')
legend('CG','SD')
Running the script gives Figure A as shown. As the conjugate gradient iteration proceeds, it is effectively shrinking the size of the vector space it works in, since once it has searched along a direction it never needs to search that direction again. The condition number restricted to the remaining subspace keeps improving, so the convergence is superlinear: the residual decreases much faster than exponentially, i.e. faster than a straight line on a semilog scale.
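
The claim that CG never revisits a search direction can be checked numerically: the directions it generates are A-conjugate, i.e. mutually orthogonal in the A inner product. Below is a minimal sketch of such a check, mirroring the CG loop above; the matrix S and the short loop are introduced only for this illustration and assume A and b are still defined.

% Illustrative check of A-conjugacy: the search directions s_1,...,s_m
% satisfy s_i'*A*s_j = 0 for i ~= j, so S'*A*S should be (numerically) diagonal.
m = 10;
S = zeros(100,m); % columns hold the first m search directions
x = zeros(100,1); r = b; rho_c = r'*r;
for k = 1:m
if k == 1
s = r;
else
s = r + (rho_c/rho_m)*s;
end
S(:,k) = s;
g = A*s;
alpha_c = rho_c/(s'*g);
x = x + alpha_c*s;
r = r - alpha_c*g;
rho_m = rho_c; rho_c = r'*r;
end
M = S'*A*S;
max(max(abs(M - diag(diag(M))))) % largest off-diagonal entry, should be near 0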

The minimum fractional error in each eigenvalue λ, plotted as a function of the number of matrix-vector multiplies, is shown for Lanczos and for Lanczos restarted with the Ritz vectors every 10 iterations; the convergence is irregular because Lanczos has trouble resolving the interior of the spectrum accurately. Also shown is the residual convergence using conjugate gradient (CG) or steepest descent (SD): the actual CG residual ||Ax - b|| together with the residual estimated from the upper bound.
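
A comparison of that kind can be sketched for the tridiagonal A above by plotting the computed residual_CG against the classical CG bound 2((√κ - 1)/(√κ + 1))^k. Strictly this bounds the A-norm of the error rather than the residual, so it is used here only as a reference curve, scaled by the initial residual ||b||; the lines below are an illustration, not part of the original script.

% Compare the measured CG residuals with the classical convergence bound
% 2*((sqrt(kappa)-1)/(sqrt(kappa)+1))^k, scaled by the initial residual ||b||.
kappa = cond(A);
fac = (sqrt(kappa)-1)/(sqrt(kappa)+1);
nit = find(residual_CG > 0, 1, 'last'); % iterations actually performed
kk = (1:nit)';
bound = 2*fac.^kk*norm(b);
figure
semilogy(kk, residual_CG(kk), 'r', kk, bound, 'k--', 'linewidth', 2)
xlabel('Iteration'), ylabel('Residual norm')
legend('actual ||Ax-b||', 'estimated upper bound')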

We can quantify this in various ways. For example, since on each step the residual is "supposed" to decrease by a factor f = (√κ - 1)/(√κ + 1), define an "effective" condition number κ̃n from the actual rate of decrease fn = (√κ̃n - 1)/(√κ̃n + 1) = ||rn+1||/||rn||, so that √κ̃n = (1 + fn)/(1 - fn). By the time 30 iterations have passed, κ̃30 is only about 38; κ̃40 ≈ 21; and κ̃60 ≈ 8.8. For this set of eigenvalues, separating out the two smallest λ gives a condition number of 34, and separating out the 11 smallest eigenvalues gives a condition number of 8.4. Separating out the largest eigenvalues does not reduce the condition number nearly as quickly. So, looking at the spectrum, as the algorithm progresses CG is effectively eliminating the smallest eigenvalues, improving the effective condition number of A, and hence the rate of convergence is better than the pessimistic bound.
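
This calculation can be reproduced from the recorded residuals; the lines below are a minimal sketch, reusing residual_CG from the script above (the name kappa_eff is introduced here only for illustration, and the final indexing assumes at least 60 iterations were performed).

% Effective condition number from consecutive residual ratios:
% fn = ||r_{n+1}||/||r_n||, sqrt(kappa_eff_n) = (1 + fn)/(1 - fn).
nit = find(residual_CG > 0, 1, 'last');
fn = residual_CG(2:nit) ./ residual_CG(1:nit-1);
kappa_eff = ((1 + fn) ./ (1 - fn)).^2;
kappa_eff([30 40 60]) % effective condition number at n = 30, 40, 60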

The code for the conjugate gradient method is shown below:

% Usage: [x, residualnorm, residualnorm2] = CG(A, b, x, nmax)
%
% Performs nmax steps of CG to solve Ax = b for x,
% given an initial guess x (e.g. a random vector). A must be
% Hermitian positive-definite. Returns the improved solution x.
%
% Stops after nmax steps have been performed.
%
% residualnorm is an array of length nmax of the residual norms |r|
% as computed from the recursively updated r vector during the CG
% iterations. residualnorm2 is the same, but computed directly as
% |b - A*x| instead.
function [x, residualnorm, residualnorm2] = CG(A, b, x, nmax)
q = zeros(size(x));
Aq = q;
r = b -A*x;
rr = r' * r;
alpha = 0;
initresidual = norm(r);
for n = 1:nmax
r = r -alpha * Aq;
residualnorm(n) = norm(r);
residualnorm2(n) = norm(b -A*x);
rrnew = r’ * r;
q = r + q * (rrnew / rr);
rr = rrnew;
Aq = A*q;
alpha = rr / (q' * Aq);
x = x + alpha * q;
end
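
A short usage sketch for this function on the tridiagonal system from the start of the solution (the random starting guess and the 100-step limit are illustrative choices, not fixed by the problem):

% Example call of the CG routine above.
A = diag(1:100) + diag(ones(99,1),1) + diag(ones(99,1),-1);
b = ones(100,1);
x0 = randn(100,1); % random starting guess
[x, rnorm, rnorm2] = CG(A, b, x0, 100);
semilogy(rnorm, 'linewidth', 2)
hold on
semilogy(rnorm2, '--', 'linewidth', 2)
xlabel('Iteration'), ylabel('Residual norm')
legend('updated r', '||b - A*x||')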
