the difference between grid row and column, and the row and column notation for a 2D array).
In order to write this problem using matrix notation, we must select an order for the unknowns $u_{ij}$. The classical order is to start from the bottom grid row and move from left to right, and then move up to the next grid row. So, for the above problem with $n = 9$ unknowns, the $9\times 1$ column vector of unknowns is given by $u^T = (u_{11}, u_{21}, u_{31}, u_{12}, u_{22}, u_{32}, u_{13}, u_{23}, u_{33})$. Once we have agreed on this order, the coefficient matrix can be written down by considering each of the above equations. Each row of the matrix contains the coefficients of the equation at grid point $(i,j)$; for example, the first row contains the coefficients of the equation at grid point $(1,1)$.
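Under this ordering the coefficient matrix can be assembled with Kronecker products. The following is a minimal sketch in NumPy; it assumes the standard 5-point stencil with diagonal entries 4 and off-diagonal entries -1, which is the usual sign convention for this model problem:

```python
import numpy as np

N = 3  # interior grid points per direction, so n = N*N = 9 unknowns

# B: tridiagonal block from the 5-point stencil within one grid row
B = 4 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
# T: couples a grid row to the rows directly above and below it
T = np.eye(N, k=1) + np.eye(N, k=-1)

# 9x9 coefficient matrix in the bottom-row-first, left-to-right ordering
A = np.kron(np.eye(N), B) - np.kron(T, np.eye(N))

print(A[0])  # coefficients of the equation at grid point (1,1)
```

The first row couples $u_{11}$ (coefficient 4) to its right neighbor $u_{21}$ and upper neighbor $u_{12}$ (coefficient $-1$ each), matching the stencil.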
A more compact way of writing this is to group the grid rows into three $3\times 1$ vectors: $U_1^T = (u_{11}, u_{21}, u_{31})$, $U_2^T = (u_{12}, u_{22}, u_{32})$ and $U_3^T = (u_{13}, u_{23}, u_{33})$. The above $9\times 9$ matrix then becomes a $3\times 3$ block system with each block a $3\times 3$ matrix:
$$
\begin{bmatrix} B & -I & 0\\ -I & B & -I\\ 0 & -I & B \end{bmatrix}
\begin{bmatrix} U_1\\ U_2\\ U_3 \end{bmatrix}
=
\begin{bmatrix} F_1\\ F_2\\ F_3 \end{bmatrix}
\quad\text{where } B = \begin{bmatrix} 4 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 4 \end{bmatrix}.
$$
It is common for finite difference models in two space dimensions to generate block tridiagonal matrices. The following is a generalization of the tridiagonal algorithm that was presented in the previous section. Consider a block tridiagonal system where the $A_i$ are square and nonsingular. Note a block LU factorization will have block bidiagonal matrices:
$$
\begin{bmatrix}
A_1 & C_1 & 0 & \cdots & 0\\
B_2 & A_2 & C_2 & & \vdots\\
0 & B_3 & \ddots & \ddots & 0\\
\vdots & & \ddots & \ddots & C_{N-1}\\
0 & \cdots & 0 & B_N & A_N
\end{bmatrix}
=
\begin{bmatrix}
\alpha_1 & 0 & \cdots & & 0\\
B_2 & \alpha_2 & 0 & & \vdots\\
0 & B_3 & \ddots & \ddots & \\
\vdots & & \ddots & \ddots & 0\\
0 & \cdots & 0 & B_N & \alpha_N
\end{bmatrix}
\begin{bmatrix}
I & \gamma_1 & 0 & \cdots & 0\\
0 & I & \gamma_2 & & \vdots\\
 & & \ddots & \ddots & 0\\
\vdots & & & I & \gamma_{N-1}\\
0 & \cdots & & 0 & I
\end{bmatrix}.
$$
The equations for the $\alpha_i$ and $\gamma_i$ matrices can be found just as in the $1\times 1$ block case, but now one must be careful about the order of matrix products and must replace all divisions by matrix solves.
Block Tridiagonal Algorithm (Thomas Algorithm):
$$\alpha_1 = A_1, \quad \text{solve } \alpha_1\gamma_1 = C_1, \quad \text{solve } \alpha_1 Y_1 = D_1$$
for $i = 2:N$
    $\alpha_i = A_i - B_i\gamma_{i-1}$
    solve $\alpha_i\gamma_i = C_i$
    solve $\alpha_i Y_i = D_i - B_i Y_{i-1}$
$X_N = Y_N$
for $i = N-1:-1:1$
    $X_i = Y_i - \gamma_i X_{i+1}$
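The algorithm can be sketched in NumPy as follows; the function name and argument layout are illustrative, not from the text, and each "solve" line corresponds to a matrix solve in the algorithm above:

```python
import numpy as np

def block_thomas(A_blocks, B_blocks, C_blocks, D):
    """Solve a block tridiagonal system by the block Thomas algorithm.

    A_blocks[i]: diagonal blocks A_i (square, nonsingular alpha_i assumed)
    B_blocks[i]: subdiagonal blocks B_i (B_blocks[0] is unused)
    C_blocks[i]: superdiagonal blocks C_i (C_blocks[-1] is unused)
    D[i]:        right-hand-side blocks D_i
    """
    N = len(A_blocks)
    gamma, Y = [None] * N, [None] * N

    # forward sweep: alpha_1 = A_1, solve for gamma_1 and Y_1
    alpha = A_blocks[0]
    gamma[0] = np.linalg.solve(alpha, C_blocks[0])
    Y[0] = np.linalg.solve(alpha, D[0])
    for i in range(1, N):
        alpha = A_blocks[i] - B_blocks[i] @ gamma[i - 1]   # alpha_i = A_i - B_i gamma_{i-1}
        if i < N - 1:
            gamma[i] = np.linalg.solve(alpha, C_blocks[i]) # solve alpha_i gamma_i = C_i
        Y[i] = np.linalg.solve(alpha, D[i] - B_blocks[i] @ Y[i - 1])

    # backward sweep: X_N = Y_N, then X_i = Y_i - gamma_i X_{i+1}
    X = [None] * N
    X[-1] = Y[-1]
    for i in range(N - 2, -1, -1):
        X[i] = Y[i] - gamma[i] @ X[i + 1]
    return X
```

In practice one would factor each $\alpha_i$ once (an LU factorization) and reuse it for the $\gamma_i$ and $Y_i$ solves; `np.linalg.solve` is used here for brevity.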
Proposition 8. If $A$ has square diagonal blocks and is SPD (symmetric positive definite), then the $\alpha_i$ are SPD.
Proof. A formal proof should use mathematical induction with respect to the number of diagonal blocks. Here we outline the proof by considering the $3\times 3$ block case. The first and second alpha hats are $\hat{\alpha}_1 = A_1$ and $\hat{\alpha}_2 = A_2 - B_2 A_1^{-1} C_1$.
Because the matrix is SPD, the following principal submatrices are also SPD (any principal submatrix of an SPD matrix is SPD):
$$A_1 \quad\text{and}\quad \begin{bmatrix} A_1 & C_1\\ B_2 & A_2 \end{bmatrix}.$$
Therefore, the second alpha hat must also be SPD because it is just the Schur complement of $A_1$ in the $2\times 2$ block matrix. Because $A_1$ is nonsingular and the $3\times 3$ block matrix is symmetric, the following block row and block column operations give
$$
\begin{bmatrix} I & 0 & 0\\ -B_2 A_1^{-1} & I & 0\\ 0 & 0 & I \end{bmatrix}
\begin{bmatrix} A_1 & C_1 & 0\\ B_2 & A_2 & C_2\\ 0 & B_3 & A_3 \end{bmatrix}
\begin{bmatrix} I & -(B_2 A_1^{-1})^T & 0\\ 0 & I & 0\\ 0 & 0 & I \end{bmatrix}
=
\begin{bmatrix} A_1 & 0 & 0\\ 0 & A_2 - B_2 A_1^{-1} C_1 & C_2\\ 0 & B_3 & A_3 \end{bmatrix}.
$$
Therefore, the matrix on the right, and hence its lower $2\times 2$ diagonal block, must also be SPD. Now repeat the above argument on this $2\times 2$ block. Note the Schur complement of the second alpha hat in this $2\times 2$ block matrix is just the third alpha hat.
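The proposition can be checked numerically for the model problem above. This sketch uses the recursion $\hat{\alpha}_i = A_i - B_i \hat{\alpha}_{i-1}^{-1} C_{i-1}$ with $A_i = B$, $B_i = C_{i-1} = -I$ from the $3\times 3$ block system:

```python
import numpy as np

# blocks of the 3x3 block model problem
B = 4 * np.eye(3) - np.eye(3, k=1) - np.eye(3, k=-1)
I3 = np.eye(3)

# alpha hats: a_i = A_i - B_i a_{i-1}^{-1} C_{i-1}, with B_i = C_{i-1} = -I
a1 = B
a2 = B - (-I3) @ np.linalg.solve(a1, -I3)
a3 = B - (-I3) @ np.linalg.solve(a2, -I3)

for a in (a1, a2, a3):
    assert np.allclose(a, a.T)              # symmetric
    assert np.linalg.eigvalsh(a).min() > 0  # positive definite
```

Each alpha hat is symmetric with strictly positive eigenvalues, as the proposition asserts.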
Operation Counts.
For the model problem on an $N\times N$ grid there are $n = N^2$ unknowns. Full Gaussian elimination will require about $2(N^2)^3/3$ operations, and the block tridiagonal algorithm will require about $4(N^2)^2$ operations, where the solves have been done by LU factorization and lower and upper triangular solves. In particular, the forward loop has $N-1$ matrix-matrix products, each with $2N^3$ operations; $N-1$ LU factorizations, each with $2N^3/3$ operations; and $N^2$ triangular solves, each with $N^2$ operations. Consequently, for block tridiagonal matrices the block tridiagonal algorithm requires both fewer operations and less storage.
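As a rough arithmetic check of the leading-order counts above, the ratio of the two works out to $\frac{2(N^2)^3/3}{4(N^2)^2} = N^2/6$, so the advantage grows quadratically with the grid size:

```python
# leading-order operation counts for an N x N grid (n = N^2 unknowns)
for N in (10, 100):
    full  = 2 * (N**2)**3 / 3   # full Gaussian elimination
    block = 4 * (N**2)**2       # block tridiagonal (Thomas) algorithm
    print(f"N={N}: full={full:.1e}, block={block:.1e}, ratio={full/block:.0f}")
```

For $N = 100$ this is about $6.7\times 10^{11}$ versus $4\times 10^{8}$ operations, a factor of roughly $1667 \approx 100^2/6$.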