 |B| = | 1    1    1   |
       | λ1   λ2   λ3  | = −(λ1 − λ2)(λ2 − λ3)(λ3 − λ1) ≠ 0
       | λ1²  λ2²  λ3² |
Hence B–1 exists.
Multiplying (4) by B⁻¹ results in
 [ aX1 ]
 [ bX2 ] = 0.
 [ cX3 ]
But this requires a = b = c = 0 which is contrary to the hypothesis.
Thus X1, X2, X3 are linearly independent.
Theorem 2: If λ is a non-zero characteristic root (eigen value) of the non-singular n-square matrix A, then |A|/λ is a characteristic root of adj A.
Proof: For a non-singular n-square matrix A, the ‘characteristic polynomial’ is
 φ(λ) = |λI − A| = λⁿ + s1λⁿ⁻¹ + s2λⁿ⁻² + … + sn−1λ + (−1)ⁿ|A| …(1)
where sr (r = 1, 2, …, n − 1) is (−1)ʳ times the sum of all the r-square principal minors of A.
The corresponding characteristic equation is given by
 λⁿ + s1λⁿ⁻¹ + s2λⁿ⁻² + … + sn−1λ + (−1)ⁿ|A| = 0 …(2)
and on the same lines
 |µI − adj A| = µⁿ + s1′µⁿ⁻¹ + s2′µⁿ⁻² + … + sn−1′µ + (−1)ⁿ|adj A| …(3)
where sr′ (r = 1, 2, …, n − 1) is (−1)ʳ times the sum of the r-square principal minors of adj A.
Thus by the property |adj A| = |A|ⁿ⁻¹ and the definition of sr, we have
 s1′ = (−1)ⁿ sn−1,
 s2′ = (−1)ⁿ |A| sn−2,
 ⋮ …(4)
 sn−1′ = (−1)ⁿ |A|ⁿ⁻² s1;
so that
 |µI − adj A| = (−1)ⁿ |A|ⁿ⁻¹ [1 + s1(µ/|A|) + … + sn−1(µ/|A|)ⁿ⁻¹ + (−1)ⁿ µⁿ/|A|ⁿ⁻¹] = f(µ) …(5)
Now
 f(|A|/λ) = (−1)ⁿ |A|ⁿ⁻¹ [1 + s1(1/λ) + … + sn−1(1/λ)ⁿ⁻¹ + (−1)ⁿ |A|/λⁿ] …(6)
and by equation (2), we have
 f(|A|/λ) = (−1)ⁿ (|A|ⁿ⁻¹/λⁿ) {λⁿ + s1λⁿ⁻¹ + … + sn−1λ + (−1)ⁿ|A|} = 0
Hence |A|/λ is a characteristic root of adj A.
Theorem 3: Eigen values (characteristic roots) of orthogonal matrix A are of absolute value 1.
Proof: Let λi, Xi be a characteristic root and an associated characteristic (invariant) vector of an orthogonal matrix A. Then, since A′A = I for orthogonal A,
 X̄i′Xi = X̄i′(A′A)Xi = (AX̄i)′(AXi)
⇒ X̄i′Xi = (λ̄iX̄i)′(λiXi) = λ̄iλi X̄i′Xi
or (1 − λ̄iλi)X̄i′Xi = 0, which implies 1 − λ̄iλi = 0, since X̄i′Xi ≠ 0.
Thus |λi| = 1.
Theorem 4: If λi (≠ ±1) is a characteristic root of an orthogonal matrix A, then the corresponding characteristic vector Xi satisfies Xi′Xi = 0.
Proof: For the characteristic value λi and corresponding characteristic vector Xi of the orthogonal matrix A, we have
 Xi′Xi = Xi′(A′A)Xi = (AXi)′(AXi), (as A is given orthogonal)
⇒ Xi′Xi = (λiXi)′(λiXi) = λi²Xi′Xi, using the transformation AXi = λiXi
⇒ (1 − λi²)Xi′Xi = 0
⇒ either 1 − λi² = 0 or Xi′Xi = 0. But λi ≠ ±1.
Hence Xi′Xi = 0.
Theorem 5: For a symmetrical square matrix, show that the eigen vectors corresponding to
two unequal eigen values are orthogonal. [NIT Kurukshetra, 2004; KUK, 2004, 2006 ]
Proof: Let A be any symmetric matrix i.e., A´ = A and λ1 and λ2 two unequal eigen values,
i.e., λ1 ≠ λ2
Let X1 and X2 be the two corresponding eigen vectors.
Now for λ1, (A – λ1I) X1 = 0
Matrices and Their Applications 57
Premultiplying AX1 = λ1X1 by X2′ and AX2 = λ2X2 by X1′, and using A′ = A, gives X2′AX1 = λ1X2′X1 and X2′AX1 = (X1′AX2)′ = λ2X2′X1, whence (λ1 − λ2)X2′X1 = 0, and so X2′X1 = 0, since λ1 ≠ λ2.
If X1 = [x1, x2, x3]′ and X2 = [y1, y2, y3]′, then
 X2′X1 = [y1 y2 y3][x1, x2, x3]′ = y1x1 + y2x2 + y3x3
Clearly, (y1x1 + y2x2 + y3x3) = 0.
This means the two eigen vectors are orthogonal; hence the transformation is an orthogonal transformation.
Example 33: Determine the eigen values and eigen vectors of
 A = [ −2   2  −3 ]
     [  2   1  −6 ]
     [ −1  −2   0 ]
[NIT Kurukshetra, 2008]
Solution: The characteristic equation,
 | −2−λ   2    −3 |
 |  2    1−λ   −6 | = 0 or λ³ + λ² − 21λ − 45 = 0
 | −1    −2    −λ |
⇒ The roots of above equation are 5, –3, –3.
Putting λ = 5, the equations to be solved for x1, x2, x3 are [A – λI]x = 0
i.e. –7x + 2y – 3z = 0, 2x – 4y – 6z = 0, –x – 2y – 5z = 0.
Note that the third equation is dependent on the first two, i.e. R1 + 2R2 = 3R3.
Solving them, we get x = k, y = 2k, z = –k
Similarly for λ = –3, the equations are
x + 2y – 3z = 0, 2x + 4y – 6z = 0, –x – 2y + 3z = 0
The second and third equations are multiples of the first; therefore only one equation is independent in this case.
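The eigen computation of Example 33 can be checked numerically; a minimal sketch with NumPy (the matrix is as transcribed above):

```python
import numpy as np

# Matrix of Example 33, as transcribed above.
A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])

# Coefficients of the characteristic polynomial: lambda^3 + lambda^2 - 21*lambda - 45.
coeffs = np.poly(A)

# Eigenvalues should be 5, -3, -3.
eigvals = np.sort(np.linalg.eigvals(A).real)

# Eigenvector for lambda = 5, scaled so its first component is 1: (1, 2, -1).
w, V = np.linalg.eig(A)
v5 = V[:, np.argmax(w.real)].real
v5 = v5 / v5[0]
```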
58 Engineering Mathematics through Applications
Example 34: Find the eigen values and eigen vectors of
 A = [  6  −2   2 ]
     [ −2   3  −1 ]
     [  2  −1   3 ]
ASSIGNMENT 2
1. The characteristic roots of A and A´ are the same.
2. The characteristic roots of Ā are the conjugates of the characteristic roots of A.
3. If λ1, λ2, …, λn are the characteristic roots of an n-square matrix A and if k is a scalar,
then kλ1, kλ2, …, kλn are the characteristic roots of kA.
4. If A is a square matrix, show that the latent roots of ‘A’ are identical.
A more general transformation than (1) is obtained when the new axes are rotated through different angles θ and φ, so that the angle between them does not remain a right angle. The most general linear transformation in two dimensions is therefore
 x′ = a1x + b1y
 y′ = a2x + b2y …(2)
then the transformation of (x1, x2, x3) to (z1, z2, z3) is given by Z = (BA)X, where
 BA = [  1  1  1 ][  2  1   0 ]   [  1   4  −1 ]
      [  1  2  3 ][  0  1  −2 ] = [ −1   9  −1 ]
      [ −1  3  5 ][ −1  2   1 ]   [ −7  12  −1 ]
Observations: It is seen that every square matrix defines a linear transformation. Furthermore, the inverse transformation X = A⁻¹Y can be written only for a non-singular matrix A.
Solution: Given
 ξ = x cos α − y sin α
 η = x sin α + y cos α …(1)
i.e.
 [ ξ ]   [ cos α  −sin α ][ x ]
 [ η ] = [ sin α   cos α ][ y ] …(2)
or, more precisely, Y = AX, where
 Y = [ ξ, η ]′,  A = [ cos α  −sin α ],  X = [ x, y ]′,
                     [ sin α   cos α ]
representing a linear transformation with A as the matrix of transformation.
Now
 A′ = [  cos α  sin α ] …(3)
      [ −sin α  cos α ]
Example 36: Is the matrix
 [  2  −3  1 ]
 [  4   3  1 ]
 [ −3   1  9 ]
orthogonal? If not, can it be converted into an orthogonal matrix? [KUK, 2005]
Solution: Let the given matrix be A. Then to check its orthogonality, find AA'
Thus
 AA′ = [  2  −3  1 ][  2  4  −3 ]
       [  4   3  1 ][ −3  3   1 ]
       [ −3   1  9 ][  1  1   9 ]

     = [ 4+9+1    8−9+1     −6−3+9  ]   [ 14   0   0 ]
       [ 8−9+1    16+9+1    −12+3+9 ] = [  0  26   0 ]
       [ −6−3+9   −12+3+9   9+1+81  ]   [  0   0  91 ]

Since AA′ ≠ I, the given matrix is not orthogonal. However, its rows are mutually orthogonal, so on dividing each row by its length (√14, √26, √91 respectively) we obtain an orthogonal matrix.
Hence, the orthogonal form of the matrix A is
 [  2/√14  −3/√14  1/√14 ]
 [  4/√26   3/√26  1/√26 ]
 [ −3/√91   1/√91  9/√91 ]
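The check and the row normalization of Example 36 can be sketched with NumPy:

```python
import numpy as np

# Example 36: the rows of A are mutually orthogonal but not of unit length,
# so A A' comes out diagonal (14, 26, 91) rather than the identity.
A = np.array([[ 2.0, -3.0, 1.0],
              [ 4.0,  3.0, 1.0],
              [-3.0,  1.0, 9.0]])

gram = A @ A.T                                   # diag(14, 26, 91)

# Dividing each row by its length yields an orthogonal matrix.
B = A / np.linalg.norm(A, axis=1, keepdims=True)
check = B @ B.T                                  # should be the identity
```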
Example 37: Prove that
 [  l   m   n   0 ]
 [  0   0   0  −1 ]
 [  n   l  −m   0 ]
 [ −m   n  −l   0 ]
is orthogonal when l = 2/7, m = 3/7, n = 6/7.

Solution: If we denote the given matrix by A, then it is to be shown that (l, m, n) = (2/7, 3/7, 6/7) makes A an orthogonal matrix; in other words, that AA′ = I holds with l = 2/7, m = 3/7, n = 6/7.
Now
 AA′ = [  l   m   n   0 ][  l   0   n  −m ]
       [  0   0   0  −1 ][  m   0   l   n ]
       [  n   l  −m   0 ][  n   0  −m  −l ]
       [ −m   n  −l   0 ][  0  −1   0   0 ]

⇒ AA′ = [ l²+m²+n²    0   nl+ml−mn    −lm+mn−nl ]
        [ 0           1   0           0         ]
        [ nl+ml−nm    0   n²+l²+m²    −nm+ln+lm ]
        [ −ml+nm−ln   0   −mn+nl+ml   m²+n²+l²  ]

For AA′ = I we thus require
 nl + ml − mn = 0 …(1)
and l² + m² + n² = 1 …(2)
From (1), we have l/m + l/n = 1.
Let l/n = k; then l/m = (1 − k) …(3)
Again, suppose k = 1/3; then
 l/m = 2/3, or m = 3l/2
 l/n = 1/3, or n = 3l …(4)
Then using (4) in (2), we get
 l² + m² + n² = l² + (9/4)l² + 9l² = (49/4)l² = 1
⇒ either l = 2/7 or l = −2/7 …(5)
Taking l = 2/7, we get m = 3/7 and n = 6/7.
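Example 37 can be verified numerically; a quick sketch assuming the transcription of the 4×4 matrix above:

```python
import numpy as np

# Example 37: with l = 2/7, m = 3/7, n = 6/7 the 4x4 matrix is orthogonal.
l, m, n = 2/7, 3/7, 6/7
A = np.array([[ l,  m,  n,  0],
              [ 0,  0,  0, -1],
              [ n,  l, -m,  0],
              [-m,  n, -l,  0]])

# Both products A A' and A' A should give the 4x4 identity.
left = A @ A.T
right = A.T @ A
```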
Theorem 1: Prove that both ‘the inverse and transpose’ of an orthogonal matrix are also
orthogonal.
Solution: For an orthogonal matrix A, we know that
 AA′ = I = A′A and A′ = A⁻¹
Let A⁻¹ = B.
Case I: For B to be orthogonal, we are to prove that
 BB′ = B′B = I
∴ BB′ = (A⁻¹)(A⁻¹)′ = A⁻¹(A′)⁻¹ = A⁻¹(A⁻¹)⁻¹ = A⁻¹A = I
Similarly, B′B = (A⁻¹)′A⁻¹ = (A′)⁻¹A⁻¹ = (A⁻¹)⁻¹A⁻¹ = AA⁻¹ = I
Hence the inverse of an orthogonal matrix is also orthogonal.
Case II: Let A' = B. For B to be orthogonal, we need to prove that
BB´ = I = B´B
∴ BB´ = A'(A')' = A'A = I;
Also B'B = (A')'A' = AA' = I
Hence transpose of an orthogonal matrix is also orthogonal.
Theorem 2: A linear transformation preserves length if and only if its matrix is orthogonal.
Solution: Let Y1, Y2 be the respective images of X1, X2 under the linear transformation
Y = AX
Suppose A is orthogonal, then AA' = I = A'A
Now,
Y1 · Y2 = Y1'Y2 = (AX1)'(AX2) = X1´ (A´A)X2 = X1 · X2 inner product.
Hence the transformation preserves length.
Conversely, suppose lengths (i.e., inner products) are preserved.
Then, Y1 · Y2 = Y1' Y2 = (AX1)´ (AX2) = X1´ (A´A) X2
But, Y1 · Y2 = X1 · X2 (given) i.e., X1´ (A´A)X2 must be equal to X1 · X2 which is only possible
when A´A = I
Hence A is orthogonal.
For example, the linear transformation
 Y = AX = [ 1/3   2/3   2/3 ] X
          [ 2/3   1/3  −2/3 ]
          [ 2/3  −2/3   1/3 ]
is orthogonal. The image of X = [a b c]′ is
 Y = [ a/3 + 2b/3 + 2c/3,  2a/3 + b/3 − 2c/3,  2a/3 − 2b/3 + c/3 ]′
and both vectors are of length √(a² + b² + c²).
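Length preservation by this orthogonal matrix can be demonstrated numerically; a sketch with an arbitrary random vector pair (the seed and vectors are illustrative, not from the text):

```python
import numpy as np

# The 3x3 example above: an orthogonal A preserves lengths and inner products.
A = np.array([[1.0,  2.0,  2.0],
              [2.0,  1.0, -2.0],
              [2.0, -2.0,  1.0]]) / 3.0

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal(3), rng.standard_normal(3)
Y1, Y2 = A @ X1, A @ X2

same_inner = np.isclose(Y1 @ Y2, X1 @ X2)            # inner product preserved
same_len = np.isclose(np.linalg.norm(Y1), np.linalg.norm(X1))
```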
a b c
Example 38: Given that A = b c a , where a, b, c are roots of x3 + x2 + k = 0 (k is a
c a b
constant). Prove that A is orthogonal.
Solution: Since a, b, c are the roots of x³ + x² + k = 0,
 a + b + c = −1 …(1)
 ab + bc + ca = 0 …(2)
so that a² + b² + c² = (a + b + c)² − 2(ab + bc + ca) = 1 …(3)
Now
 AA′ = [ a²+b²+c²   ab+bc+ca   ca+ab+bc ]
       [ ab+bc+ca   b²+c²+a²   bc+ca+ab ] …(4)
       [ ca+ab+bc   bc+ca+ab   c²+a²+b² ]
On using (1), (2) and (3),
 AA′ = [ 1 0 0; 0 1 0; 0 0 1 ] = I
Hence, with a, b, c as the roots of the given cubic, the matrix A is orthogonal.
Example 39: If
 [ l1 m1 n1 ]
 [ l2 m2 n2 ]
 [ l3 m3 n3 ]
defines an orthogonal transformation, then show that
 lilj + mimj + ninj = 0 (i ≠ j); = 1 (i = j); i, j = 1, 2, 3.
Solution: We know that for an orthogonal matrix A, AA´ = I = A´A and A´ = A–1
∴ AA′ = [ l1 m1 n1 ][ l1 l2 l3 ]
        [ l2 m2 n2 ][ m1 m2 m3 ], for the given A
        [ l3 m3 n3 ][ n1 n2 n3 ]

      = [ l1²+m1²+n1²      l1l2+m1m2+n1n2   l1l3+m1m3+n1n3 ]
        [ l2l1+m2m1+n2n1   l2²+m2²+n2²      l2l3+m2m3+n2n3 ]
        [ l3l1+m3m1+n3n1   l3l2+m3m2+n3n2   l3²+m3²+n3²    ]
For A to be an orthogonal, AA´ = I which is possible only if,
(l12 + m12 + n12) = (l22 + m22 + n22) = (l32 + m32 + n32) = 1
and (l1l2 + m1m2 + n1n2) = (l2l3 + m2m3 + n2n3) = (l3l1 + m3m1 + n3n1) = 0.
ASSIGNMENT 3
1. Prove that the product of two orthogonal matrices is orthogonal.
2. Prove that the matrix
 [  cos θ  0  sin θ ]
 [  0      1  0     ]
 [ −sin θ  0  cos θ ]
is an orthogonal matrix.
3. Given that A = [ a b c; b c a; c a b ], where a, b, c are the roots of x³ + x² + k = 0 (k a constant), prove that A is orthogonal.
4. Show that the modulus of an orthogonal transformation is either 1 or –1.
[Hint: Since AA′ = I, |A||A′| = |I| = 1, i.e. |A|² = 1.]
 A[X1 X2 X3] = [ λ1x1  λ2x2  λ3x3 ]   [ x1 x2 x3 ][ λ1  0   0  ]
               [ λ1y1  λ2y2  λ3y3 ] = [ y1 y2 y3 ][ 0   λ2  0  ]
               [ λ1z1  λ2z2  λ3z3 ]   [ z1 z2 z3 ][ 0   0   λ3 ]
 = PD, where D is the diagonal matrix such that P⁻¹AP = D.
The resulting diagonal matrix D, contains the eigen values on its diagonal.
This transformation of a square matrix A by a non-singular matrix P to P–1AP is
termed as Similarity Transformation. The matrix P which diagonalizes the
transformation matrix A is called the Modal Matrix and the matrix D, so obtained by
the process of diagonalization is termed as Spectral Matrix.
Observations: The diagonalizing matrix of a matrix An×n may contain complex elements, because the zeros of the characteristic equation of An×n can be either real or complex conjugate pairs. Further, the diagonalizing matrix is not unique, since its form depends on the order in which the eigen values of An×n are taken.
(iv) The difference between the number of positive terms and the number of negative terms in the canonical form of a quadratic form is the signature of the quadratic form.
4. Nature of Quadratic Forms: Let Q = X′AX be a quadratic form in n variables x1, x2, …, xn. The index of a quadratic form is the number of positive terms in its canonical form, and the signature of the quadratic form is the difference between the numbers of positive and negative terms in its canonical form.
A real quadratic form X'AX is said to be
(i) positive definite if all the eigen values of A are > 0 (in this case, the rank r and
index, s of the square matrix A are equal to the number of variables, i.e. r = s = n);
(ii) negative definite if all the eigen values of A are < 0 (here r = n and s = 0);
(iii) positive semi-definite if all the eigen values of A are ≥ 0, with at least one eigen value zero (in this case, r = s < n);
(iv) negative semi-definite if all the eigen values of A are ≤ 0, with at least one eigen value zero (in this case, r < n, s = 0);
(v) indefinite if the eigen values occur with mixed signs.
5. Determination of the Nature of quadratic Form without Reduction To Canonical
Form: Let the quadratic form
 X′AX = [ x y z ][ a11 a12 a13 ][ x ]
                 [ a21 a22 a23 ][ y ]
                 [ a31 a32 a33 ][ z ]

Let A1 = a11,  A2 = | a11 a12 |,  A3 = | a11 a12 a13 |
                    | a21 a22 |        | a21 a22 a23 |
                                       | a31 a32 a33 |
Then the quadratic form X´AX is said to be
(i) positive definite if Ai > 0 for i = 1, 2, 3;
(ii) negative definite if A2 > 0 and A1 < 0, A3 < 0;
(iii) positive semi-definite if Ai ≥ 0 with at least one Ai = 0;
(iv) negative semi-definite if the sign pattern of case (ii) holds with at least one Ai = 0;
(v) indefinite in all other cases;
Example 40: Obtain eigen values, eigen vectors and diagonalize the matrix,
 A = [  8  −6   2 ]
     [ −6   7  −4 ]  [NIT Jalandhar, 2005]
     [  2  −4   3 ]
Solution: The corresponding characteristic equation is
 | 8−λ  −6    2  |
 | −6   7−λ  −4  | = 0 ⇒ −λ³ + 18λ² − 45λ = 0
 | 2    −4   3−λ |
Clearly, it is a cubic in λ and has roots 0, 3, 15.
If x1, x2, x3 be the three components of an eigen vector say ‘X’ corresponding to the eigen
values λ, then
We have
 [A − λI]X = [ 8−λ  −6    2  ][ x1 ]
             [ −6   7−λ  −4  ][ x2 ] = 0
             [ 2    −4   3−λ ][ x3 ]
For λ = 0, 8x1 − 6x2 + 2x3 = 0
 −6x1 + 7x2 − 4x3 = 0
 2x1 − 4x2 + 3x3 = 0
These equations determine a single linearly independent solution.
On solving them,
 x1/(21 − 16) = x2/(−8 + 18) = x3/(24 − 14), i.e. x1/1 = x2/2 = x3/2 = k (say)
⇒ (x1, x2, x3) = (k, 2k, 2k)
∴ Let the linearly independent solution be (1, 2, 2), as every non-zero multiple of this vector
is an eigen vector corresponding to λ = 0.
Likewise, the eigen vectors corresponding to λ = 3 and λ = 15 are the arbitrary non-zero
multiple of vectors (2, 1, –2) and (2, –2, 1).
Hence the three eigen vectors may be considered as (1, 2, 2), (2, 1, –2), (2, –2, 1).
∴ The diagonalizing matrix P = [X1 X2 X3] = [ 1   2   2 ]
                                            [ 2   1  −2 ].
                                            [ 2  −2   1 ]
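The similarity transformation of Example 40 can be verified directly; a minimal NumPy sketch:

```python
import numpy as np

# Example 40: P built from the eigenvectors diagonalizes A.
A = np.array([[ 8.0, -6.0,  2.0],
              [-6.0,  7.0, -4.0],
              [ 2.0, -4.0,  3.0]])

# Columns of P: eigenvectors for eigenvalues 0, 3, 15 (in that order).
P = np.array([[1.0,  2.0,  2.0],
              [2.0,  1.0, -2.0],
              [2.0, -2.0,  1.0]])

D = np.linalg.inv(P) @ A @ P   # spectral matrix
```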
Example 41: Find the latent roots, eigen vectors, the modal matrix (i.e., diagonalizing matrix P) and the spectral matrix of the matrix
 [ 1   0   0 ]
 [ 0   3  −1 ]
 [ 0  −1   3 ]
and hence reduce the quadratic form x1² + 3x2² + 3x3² − 2x2x3 to canonical form.
 | 1−λ  0    0  |
 | 0    3−λ  −1 | = 0 ⇒ λ³ − 7λ² + 14λ − 8 = 0
 | 0    −1   3−λ |
Clearly, it is a cubic in λ and has three roots, viz. 1, 2, 4.
Hence the latent roots of ‘A’ are 1, 2 and 4.
If x, y, z be the three components of eigen vector corresponding to these eigen values,
λ = 1, 2, 4, then
for λ = 1, [ 0  0   0 ][X1] = 0 with X1 = [ x1 ]
           [ 0  2  −1 ]                   [ y1 ]
           [ 0 −1   2 ]                   [ z1 ]
⇒ 2y1 − z1 = 0, −y1 + 2z1 = 0, having one of the possible sets of values, say, X1 = [1, 0, 0]′.
Likewise,
for λ = 2, [ −1  0   0 ][ x2 ]        ⇒ X2 = [ x2 ]   [ 0 ]
           [  0  1  −1 ][ y2 ] = 0           [ y2 ] = [ 1 ]
           [  0 −1   1 ][ z2 ]               [ z2 ]   [ 1 ]
for λ = 4, [ −3   0   0 ][X3] = 0 or y3 + z3 = 0
           [  0  −1  −1 ]
           [  0  −1  −1 ]
∴ X3 = [ x3 ]   [  0 ]
       [ y3 ] = [  1 ]
       [ z3 ]   [ −1 ]
Hence, we have the modal matrix P = [X1 X2 X3] = [ 1  0   0 ]
                                                 [ 0  1   1 ]
                                                 [ 0  1  −1 ]
and the spectral matrix D = [ λ1  0   0  ]   [ 1  0  0 ]
                            [ 0   λ2  0  ] = [ 0  2  0 ]
                            [ 0   0   λ3 ]   [ 0  0  4 ]
Canonical form as: λ1x2 + λ2y2 + λ3z2, i.e. x2 + 2y2 + 4z2
Example 42: Reduce the matrix
 [ −1   2  −2 ]
 [  1   2   1 ]
 [ −1  −1   0 ]
to the diagonal form and hence reduce it to canonical form. [UP Tech, 2006; Raipur, 2004]
 | −1−λ   2   −2 |
 |  1    2−λ   1 | = 0 ⇒ λ³ − λ² − 5λ + 5 = 0 ⇒ λ = 1, ±√5
 | −1    −1   −λ |

∴ D = [ λ1  0   0  ]   [ 1   0    0  ]
      [ 0   λ2  0  ] = [ 0  √5    0  ]
      [ 0   0   λ3 ]   [ 0   0  −√5  ]
Let X = [x, y, z]′ be an eigen vector, so that
 [ −1−λ   2   −2 ][ x ]
 [  1    2−λ   1 ][ y ] = 0
 [ −1    −1   −λ ][ z ]
On solving for each root, the diagonalizing matrix is
 P = [  1   √5−1   √5+1 ]
     [  0    1     −1   ]
     [ −1   −1      1   ]
Example 43: Show that
 H = [  cos θ  sin θ ]  with θ = (1/2) tan⁻¹[2h/(a − b)]
     [ −sin θ  cos θ ]
changes the matrix C = [ a  h ] to the diagonal form D = HCH′.
                       [ h  b ]
Solution: The off-diagonal entry of HCH′ is, up to sign, (a − b) sin θ cos θ − h(cos²θ − sin²θ), and this vanishes precisely when θ = (1/2) tan⁻¹[2h/(a − b)], i.e. when tan 2θ = 2h/(a − b).
Hence the result.
Example 44: Find the eigen vectors of the matrix
 [  6  −2   2 ]
 [ −2   3  −1 ]
 [  2  −1   3 ]
and hence reduce 6x² + 3y² + 3z² − 2yz + 4zx − 4xy to a sum of squares. [KUK, 2006, 04, 01]
The characteristic equation is
 | 6−λ  −2    2  |
 | −2   3−λ  −1  | = 0, i.e. λ³ − 12λ² + 36λ − 32 = 0, giving λ = 8, 2, 2. …(1)
 | 2    −1   3−λ |

For λ = 8,
 [A − λI]X = [ −2  −2   2 ][ x1 ]
             [ −2  −5  −1 ][ x2 ] = 0
             [  2  −1  −5 ][ x3 ]
giving the equations −2x1 − 2x2 + 2x3 = 0, −2x1 − 5x2 − x3 = 0.
Solving them, we get x1/2 = x2/(−1) = x3/1
∴ X1 = [2, −1, 1]′.
For λ = 2, the single independent equation is 2x − y + z = 0, giving two independent eigen vectors, say X2 = [1, 2, 0]′ and X3 = [1, 0, −2]′.
Hence P = [ 1  1   2 ]
          [ 2  0  −1 ]
          [ 0 −2   1 ]
The ‘sum of squares’, viz. the canonical form of the given quadratic, is
 8x′² + 2y′² + 2z′², i.e. 2(4x′² + y′² + z′²)
Example 45: Reduce the quadratic form 2xy + 2yz + 2zx to the canonical form by an
orthogonal reduction and state its nature.
[Kurukshetra, 2006; Bombay, 2003; Madras, 2002]
Solution: The given quadratic form in matrix notation is A = [ 0  1  1 ]
                                                             [ 1  0  1 ]
                                                             [ 1  1  0 ]
The eigen values of this matrix are 2, −1, −1, and the corresponding eigen vectors are
 for λ = 2,  x1 = [1, 1, 1]′;
 for λ = −1, x2 = [−1, 1, 0]′;
 for λ = −1, x3 = [0, 1, −1]′
(the eigen vectors corresponding to the repeated eigen value −1 are obtained by assigning arbitrary values to the free variables as usual).
Here we observe that x2 and x3 are not orthogonal vectors as the inner product,
x2 · x3 = –1(0) + 1(1) + 0(–1) ≠ 0.
Therefore, take x3 = [−1, −1, 2]′, so that x1, x2 and x3 are mutually orthogonal.
Now, the normalized modal matrix
 P = [ 1/√3  −1/√2  −1/√6 ]
     [ 1/√3   1/√2  −1/√6 ]
     [ 1/√3   0      2/√6 ]

Consider the orthogonal transformation X = PY, i.e.
 [ x ]   [ 1/√3  −1/√2  −1/√6 ][ x′ ]
 [ y ] = [ 1/√3   1/√2  −1/√6 ][ y′ ]
 [ z ]   [ 1/√3   0      2/√6 ][ z′ ]
Using this orthogonal transformation, the quadratic form reduces to canonical form,
Q = 2x′² − y′² − z′². The quadratic form is indefinite in nature, as the eigen values are of mixed sign, with rank r = 3 and index s = 1.
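The orthogonal reduction of Example 45 can be checked numerically; a sketch using the normalized modal matrix above:

```python
import numpy as np

# Example 45: quadratic form 2xy + 2yz + 2zx, matrix A below.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# Normalized modal matrix: mutually orthogonal unit eigenvectors as columns.
P = np.column_stack([
    np.array([ 1.0,  1.0, 1.0]) / np.sqrt(3),
    np.array([-1.0,  1.0, 0.0]) / np.sqrt(2),
    np.array([-1.0, -1.0, 2.0]) / np.sqrt(6),
])

D = P.T @ A @ P          # canonical coefficients 2, -1, -1
orth = P.T @ P           # P should be orthogonal
```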
Example 46: Reduce the quadratic form 3x12 + 3x22 + 3x32 + 2x1x2 + 2x1x3 – 2x2x3 into ‘a sum of
squares’ by an orthogonal transformation and give the matrix of transformation.
[KUK, 2008; NIT Kurukshetra, 2002]
Solution: On comparing the given quadratic with the general quadratic ax2 + by2 + cz2 + 2fyz
+ 2gzx + 2hxy, the matrix is given by
 A = [ a  h  g ]   [ 3  1   1 ]
     [ h  b  f ] = [ 1  3  −1 ]
     [ g  f  c ]   [ 1 −1   3 ]
The desired characteristic equation becomes
 |A − λI| = | 3−λ   1    1  |
            | 1    3−λ  −1  | = 0, i.e. λ³ − 9λ² + 24λ − 16 = 0, giving λ = 1, 4, 4.
            | 1    −1   3−λ |
For λ = 1, we have [ 2  1   1 ][ x1 ]
                   [ 1  2  −1 ][ y1 ] = 0
                   [ 1 −1   2 ][ z1 ]
or 2x1 + y1 + z1 = 0, x1 + 2y1 − z1 = 0, i.e.
 x1/(−1 − 2) = y1/(1 + 2) = z1/(4 − 1) = k
∴ [ x1 ]   [ −k ]      [ −1 ]
  [ y1 ] = [  k ] = k  [  1 ]
  [ z1 ]   [  k ]      [  1 ]
Similarly, for λ = 4, [ −1   1   1 ][ x ]
                      [  1  −1  −1 ][ y ] = 0,
                      [  1  −1  −1 ][ z ]
and we have two linearly independent vectors, X2 = [1, 1, 0]′ and X3 = [1, 0, 1]′.
As the transformation has to be orthogonal, to obtain P divide each element of the corresponding eigen vector by the square root of the sum of the squares of its elements, and then write P = [X1 X2 X3]:
 P = [ −1/√3  1/√2  1/√2 ]
     [  1/√3  1/√2  0    ]
     [  1/√3  0     1/√2 ]
(Note that X2 and X3 as chosen are not orthogonal to each other; for a genuinely orthogonal P, first replace X3 by X3 − [(X3·X2)/(X2·X2)]X2 ∝ [1, −1, 2]′, as was done in Example 45, and then normalize.)
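The orthogonalization step suggested in the note above can be sketched in NumPy; this assumes the eigenvectors quoted in the worked example:

```python
import numpy as np

# Example 46: for the repeated eigenvalue 4, X2 = (1,1,0) and X3 = (1,0,1)
# are independent but not orthogonal; one Gram-Schmidt step fixes X3.
A = np.array([[3.0,  1.0,  1.0],
              [1.0,  3.0, -1.0],
              [1.0, -1.0,  3.0]])

x1 = np.array([-1.0, 1.0, 1.0])            # eigenvector for lambda = 1
x2 = np.array([ 1.0, 1.0, 0.0])            # eigenvector for lambda = 4
x3 = np.array([ 1.0, 0.0, 1.0])            # eigenvector for lambda = 4
x3 = x3 - (x3 @ x2) / (x2 @ x2) * x2       # now proportional to (1, -1, 2)

P = np.column_stack([v / np.linalg.norm(v) for v in (x1, x2, x3)])
D = P.T @ A @ P                            # diag(1, 4, 4)
```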
Example 47: Discuss the nature of the quadratic 2xy + 2yz + 2zx without reduction to
canonical form.
Solution: The given quadratic in matrix form is A = [ 0  1  1 ]
                                                    [ 1  0  1 ]
                                                    [ 1  1  0 ]
Here A1 = 0;  A2 = | 0  1 | = −1 < 0;  A3 = | 0  1  1 |
                   | 1  0 |                 | 1  0  1 | = 2 > 0
                                            | 1  1  0 |
∴ The quadratic is indefinite in nature.
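The leading-minor test can be sketched as code; the helper names `leading_minors` and `nature` are illustrative, not from the text, and the third branch deliberately lumps the semi-definite and indefinite cases together rather than fully classifying them:

```python
import numpy as np

# Nature of a quadratic form from the leading principal minors A1, A2, A3,
# applied to Example 47 (form 2xy + 2yz + 2zx).
def leading_minors(A):
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def nature(A):
    m = leading_minors(A)
    if all(d > 0 for d in m):
        return "positive definite"
    if all((d < 0 if k % 2 == 0 else d > 0) for k, d in enumerate(m)):
        return "negative definite"      # minors alternate -, +, -, ...
    return "indefinite or semi-definite"

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
```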
or A⁻¹ = −(1/kn)[(−1)ⁿAⁿ⁻¹ + k1Aⁿ⁻² + … + kn−1 I]
Example 48: Verify the Cayley–Hamilton theorem for the matrix
 A = [  2  −1   1 ]
     [ −1   2  −1 ]
     [  1  −1   2 ]
Hence compute A⁻¹. [KUK, 2005, 2008; Madras, 2006; UP Tech, 2005]
Solution: The characteristic equation, is
 |A − λI| = | 2−λ  −1    1  |
            | −1   2−λ  −1  | = 0 or λ³ − 6λ² + 9λ − 4 = 0 …(1)
            | 1    −1   2−λ |
To verify the Cayley–Hamilton theorem, we have to show that
 A³ − 6A² + 9A − 4I = 0
Obtain A² = [  2  −1   1 ][  2  −1   1 ]   [  6  −5   5 ]
            [ −1   2  −1 ][ −1   2  −1 ] = [ −5   6  −5 ] …(2)
            [  1  −1   2 ][  1  −1   2 ]   [  5  −5   6 ]

Similarly, A³ = A² × A = [  22  −21   21 ]
                         [ −21   22  −21 ] …(3)
                         [  21  −21   22 ]

Now A³ − 6A² + 9A − 4I
 = [ 22 −21 21; −21 22 −21; 21 −21 22 ] − 6[ 6 −5 5; −5 6 −5; 5 −5 6 ]
   + 9[ 2 −1 1; −1 2 −1; 1 −1 2 ] − 4[ 1 0 0; 0 1 0; 0 0 1 ] = 0 …(4)
To compute A⁻¹, multiply both sides of (4) by A⁻¹; we get
 A² − 6A + 9I − 4A⁻¹ = 0
or 4A⁻¹ = [ 6 −5 5; −5 6 −5; 5 −5 6 ] − 6[ 2 −1 1; −1 2 −1; 1 −1 2 ] + 9[ 1 0 0; 0 1 0; 0 0 1 ]

∴ A⁻¹ = (1/4)[  3  1  −1 ]
             [  1  3   1 ].
             [ −1  1   3 ]
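Both the Cayley–Hamilton identity and the resulting inverse in Example 48 can be verified numerically:

```python
import numpy as np

# Example 48: A satisfies its characteristic equation
# lambda^3 - 6*lambda^2 + 9*lambda - 4 = 0, which also yields A^{-1}.
A = np.array([[ 2.0, -1.0,  1.0],
              [-1.0,  2.0, -1.0],
              [ 1.0, -1.0,  2.0]])
I = np.eye(3)

cayley = A @ A @ A - 6 * (A @ A) + 9 * A - 4 * I   # should be the zero matrix

# From A^3 - 6A^2 + 9A - 4I = 0:  A^{-1} = (A^2 - 6A + 9I) / 4.
A_inv = (A @ A - 6 * A + 9 * I) / 4
```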
Example 49: Find the characteristic equation of the matrix
 [ 2  1  1 ]
 [ 0  1  0 ]
 [ 1  1  2 ]
and hence find the matrix represented by A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I.
[Rajasthan, 2005; UP Tech, 2003]
 | 2−λ   1    1  |
 | 0    1−λ   0  | = 0 or λ³ − 5λ² + 7λ − 3 = 0 …(1)
 | 1     1   2−λ |
Further, as we know that every matrix satisfies its own characteristic equation
Hence A3 – 5A2 + 7A – 3I = 0 …(2)
Rewrite, A8 – 5A7 + 7A6 – 3A5 + A4 – 5A3 + 8A2 – 2A + I
as (A8 – 5A7 + 7A6 – 3A5) + (A4 – 5A3 + 7A2 – 3A) + A + I
or A5 (A3 – 5A2 + 7A – 3I) + A(A3 – 5A2 + 7A – 3I) + (A2 + A + I)
On using (2), this reduces to (A² + A + I).
Hence, the given expression (A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I) represents the matrix
 A² + A + I = [ 8  5  5 ]
              [ 0  3  0 ].
              [ 5  5  8 ]
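The collapse of the long matrix polynomial in Example 49 can be confirmed numerically:

```python
import numpy as np

# Example 49: since A^3 - 5A^2 + 7A - 3I = 0, the long polynomial in A
# collapses to A^2 + A + I.
A = np.array([[2.0, 1.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 2.0]])
I = np.eye(3)

def p(k):
    return np.linalg.matrix_power(A, k)

big = (p(8) - 5 * p(7) + 7 * p(6) - 3 * p(5)
       + p(4) - 5 * p(3) + 8 * (A @ A) - 2 * A + I)
small = A @ A + A + I
```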
ASSIGNMENT 4
1. Find the eigen values, eigen vectors, modal matrix and the spectral matrix of the matrix
 [ 1   0   0 ]
 [ 0   3  −1 ]
 [ 0  −1   3 ]
and hence reduce the quadratic form x1² + 3x2² + 3x3² − 2x2x3 to a canonical form. [NIT Kurukshetra, 2004; Andhra, 2000]
2. Write down the quadratic form corresponding to the matrix
 A = [ 1  2  5 ]
     [ 2  0  3 ]
     [ 5  3  4 ]
[HINT: Quadratic form = X′AX]
3. Reduce the quadratic form 8x² + 7y² + 3z² − 12xy − 8yz + 4zx into a ‘sum of squares’ by an orthogonal transformation. State the nature of the quadratic. Also find the set of values of x, y, z which will make the form vanish. [NIT Kurukshetra, 2009]
4. Verify the Cayley–Hamilton theorem for the matrix A = [ 2 −1 1; −1 2 −1; 1 −1 2 ] and find its inverse.
Further, if A = diag(a1 + ib1, a2 + ib2, …, an + ibn), then …
e.g. (i) [ cos θ  −sin θ ],  (ii) [ −2/3   1/3   2/3 ]
         [ sin θ   cos θ ]        [  2/3   2/3   1/3 ]
                                  [  1/3  −2/3   2/3 ]
Unitary Matrix: If a square matrix A over the complex field is such that Ā′ = A⁻¹, then A is called a unitary matrix. The determinant of a unitary matrix is of unit modulus, and thus a unitary matrix is non-singular.
e.g. Let A = (1/2)[ 1+i  −1+i ], so that Ā = (1/2)[ 1−i  −1−i ]
                  [ 1+i   1−i ]                   [ 1−i   1+i ]
and Ā′ = (1/2)[  1−i  1−i ]
              [ −1−i  1+i ]
∴ AĀ′ = (1/4)[ 1+i  −1+i ][  1−i  1−i ]
             [ 1+i   1−i ][ −1−i  1+i ]

 = (1/4)[ (1−i²)+(1−i²)   (1−i²)−(1−i²) ] = (1/4)[ 4  0 ] = I.
        [ (1−i²)−(1−i²)   (1−i²)+(1−i²) ]        [ 0  4 ]
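The unitarity of this example can be checked with complex NumPy arrays:

```python
import numpy as np

# The 2x2 example above: A = (1/2) [[1+i, -1+i], [1+i, 1-i]] is unitary.
A = 0.5 * np.array([[1 + 1j, -1 + 1j],
                    [1 + 1j,  1 - 1j]])

product = A @ A.conj().T            # should be the identity
det_modulus = abs(np.linalg.det(A)) # determinant of unit modulus
```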
Hermitian Matrix: A square matrix A is said to be Hermitian if Ā′ = A, where Ā denotes the matrix whose elements are the complex conjugates of the elements of A. [PTU, 2007, 2008]
In terms of general elements, the above assertion implies āji = aij, and in particular āii = aii, which shows that all the diagonal elements of a Hermitian matrix are real.
A square matrix A is said to be skew-Hermitian if Ā′ = −A, whence the leading diagonal elements of a skew-Hermitian matrix are either purely imaginary or zero.
Thus, Hermitian and skew-Hermitian matrices are the generalizations in the complex field of symmetric and skew-symmetric matrices respectively.
e.g. (i) [ 1     5+4i ]   (ii) [ 1     1+i   2+3i ]   (iii) [ i      1+i    2+3i ]
         [ 5−4i  2    ]        [ 1−i   2     3+4i ]         [ −1+i   2i     3+4i ]
                               [ 2−3i  3−4i  3    ]         [ −2+3i  −3+4i  3i   ]
Clearly (i) and (ii) are examples of Hermitian matrices, all of whose diagonal elements are real numbers, while (iii) is an example of a skew-Hermitian matrix, as all of its diagonal elements are purely imaginary.
Example 50: Show that A = [  1      7−4i   −2+5i ] is Hermitian.
                          [  7+4i   −2     3+i   ]
                          [ −2−5i   3−i    4     ]
Solution: If the transpose A′ of the square matrix A equals its complex conjugate Ā, i.e. A′ = Ā, then A is Hermitian.
Clearly, A′ = [  1      7+4i   −2−5i ]
              [  7−4i   −2     3−i   ]
              [ −2+5i   3+i    4     ]
and each element ars = αrs + iβrs of A′ equals the element ārs = αrs − iβrs of Ā, i.e. A′ = Ā.
Hence the matrix A is Hermitian.
Normal Matrices: A square matrix A is called normal if AĀ′ = Ā′A, where Ā′ (also written AΘ) stands for the conjugate transpose of A. Normal matrices include diagonal, real symmetric, real skew-symmetric, orthogonal, Hermitian, skew-Hermitian and unitary matrices.
Note: If A is any normal matrix and U is a unitary matrix, then Ū′AU is normal:
Let X = Ū′AU; then X̄′ = Ū′Ā′U, and using UŪ′ = I = Ū′U,
 X̄′X = (Ū′Ā′U)(Ū′AU) = Ū′Ā′AU = Ū′AĀ′U = (Ū′AU)(Ū′Ā′U) = XX̄′.
Theorem 1: Any square matrix can be expressed uniquely as a sum of Hermitian and
Skew-Hermitian Matrix.
Proof: For any square matrix A, write H = (1/2)(A + AΘ) and S = (1/2)(A − AΘ), where AΘ denotes the conjugate transpose Ā′, so that A = H + S.
Then HΘ = (1/2)(A + AΘ)Θ = (1/2)(AΘ + A) = H
[∵ the conjugate transpose of the conjugate transpose of a matrix is the matrix itself]
Hence H is Hermitian,
and SΘ = (1/2)(A − AΘ)Θ = (1/2)(AΘ − A) = −S
Hence S is skew-Hermitian.
Uniqueness: Suppose A = K + T, where K is Hermitian and T skew-Hermitian.
Then AΘ = KΘ + TΘ = K − T [∵ KΘ = K and TΘ = −T by supposition]
Adding the two, A + AΘ = 2K, or K = (1/2)(A + AΘ) = H, from the definition of H above.
On subtracting, A − AΘ = 2T, or T = (1/2)(A − AΘ) = S, from the definition of S above.
Hence H and S are unique.
 XΘX = λλ̄ XΘX
 (1 − λλ̄)XΘX = 0
Hence, either λλ̄ = 1 or XΘX = 0.
Taking the conjugate transpose of HX = λX …(1),
 (HX)Θ = (λX)Θ, i.e. XΘHΘ = λ̄XΘ
Hence XΘH = λ̄XΘ …(2)
with XΘ as the transpose of the conjugate complex of X, and HΘ = H, since H is Hermitian.
Also, from (1), XΘHX = λXΘX …(3)
Fig. 1.2: the eigen values λ of Hermitian (symmetric) matrices lie on the real axis, those of skew-Hermitian (skew-symmetric) matrices (iλ) on the imaginary axis, and those of unitary (orthogonal) matrices on the unit circle |λ| = 1.
Theorem 7: Show that for any square matrix A; (A + AΘ), AΘA are Hermitian and (A – AΘ)
is Skew-Hermitian, where AΘ stands for transpose conjugate of A.
Proof: By definition, a square matrix A is said to be Hermitian if Ā′ = A, i.e. AΘ = A.
Here, (A + AΘ)Θ = AΘ + (AΘ)Θ = AΘ + A
which shows that conjugate transpose of (A + AΘ) is equal to itself. Hence (A + AΘ) is Hermitian.
Likewise, (AAΘ)Θ = (AΘ)Θ AΘ = AAΘ. Hence AAΘ is Hermitian.
 A[X1, X2, …, Xn] = [X1, X2, …, Xn] [ λ1  0  ……  0  ]
                                    [ 0   λ2 ……  0  ]
                                    [ …   …  ……  …  ]
                                    [ 0   0  ……  λn ]
Then (A − λI)(A − λI)Θ = (A − λI)(AΘ − λ̄I)
 = AAΘ − λ̄A − λAΘ + λλ̄I
 = AΘA − λ̄A − λAΘ + λλ̄I [∵ AAΘ = AΘA]
 = (AΘ − λ̄I)(A − λI)
 = (A − λI)Θ(A − λI) …(2)
Thus (A − λI) is normal.
Now, let (A − λI) = B; by hypothesis BX = 0 …(3)
So that (BX)Θ(BX) = XΘBΘBX = XΘBBΘX = (BΘX)Θ(BΘX) = 0 …(4)
whence BΘX = 0, i.e. (AΘ − λ̄I)X = 0.
Example 51: If S = [ 1  1   1  ], where a = e^(2iπ/3), show that S⁻¹ = (1/3)S̄.
                   [ 1  a²  a  ]
                   [ 1  a   a² ]

Solution: Here
 a = e^(2iπ/3) = cos(2π/3) + i sin(2π/3) = −1/2 + i√3/2 …(2)
 a² = e^(4iπ/3) = cos(4π/3) + i sin(4π/3) = −1/2 − i√3/2 …(3)
and ā = −1/2 − i√3/2, (ā)² = −1/2 + i√3/2 …(4), (5)
Thus from equations (2) to (5), we see that
 ā = a², (ā)² = a, a³ = 1 and a⁴ = a³·a = a …(6)
Now write
 S = [ 1  1   1  ], so that S̄ = [ 1  1   1  ] (using (6)) …(7)
     [ 1  a²  a  ]              [ 1  a   a² ]
     [ 1  a   a² ]              [ 1  a²  a  ]
Here |S| = 3(a − a²), and
 S⁻¹ = 1/(3(a − a²)) [ (a−a²)  (a−a²)   (a−a²)  ]
                     [ (a−a²)  (a²−1)   (−a+1)  ]
                     [ (a−a²)  (−a+1)   (a²−1)  ]

 = (1/3) [ 1   1                 1               ]
         [ 1   (a²−a³)/(a−a²)    (−a⁴+a³)/(a−a²) ] (on replacing 1 by a³)
         [ 1   (−a⁴+a³)/(a−a²)   (a²−a³)/(a−a²)  ]

 = (1/3) [ 1  1   1  ] = (1/3)S̄
         [ 1  a   a² ]
         [ 1  a²  a  ]
since (a² − a³)/(a − a²) = a and (−a⁴ + a³)/(a − a²) = a².
2
0 1 + 2i
Example 52: If N = –1 + 2i –1
0 , obtain the matrix (1 – N)(1 + N) , and show that it is
unitary. [KUK, 2008]
0 1 + 2i 1 0
Solution: Let N= and I2 = …(1)
−1 + 2 i 0 0 1
Then (I − N) = [ 1     −1−2i ] …(2)
               [ 1−2i   1    ]
and (I + N) = [ 1      1+2i ] = [ a11  a12 ] …(3)
              [ −1+2i  1    ]   [ a21  a22 ]
with |I + N| = 1 − (1 + 2i)(−1 + 2i) = 1 − (−5) = 6, so that
 (I + N)⁻¹ = (1/6)[ 1     −1−2i ]
                  [ 1−2i   1    ]
∴ (I − N)(I + N)⁻¹ = (1/6)[ 1     −1−2i ][ 1     −1−2i ] = (1/6)[ −4     −2−4i ] …(7)
                          [ 1−2i   1    ][ 1−2i   1    ]        [  2−4i  −4    ]
Let (I − N)(I + N)⁻¹ = U; then, for U to be unitary, we must have Ū′U = I.
Thus, from equation (7), U = (1/6)[ −4     −2−4i ]
                                  [  2−4i  −4    ]
which implies Ū′ = (1/6)[ −4     2+4i ]
                        [ −2+4i  −4   ]
Now Ū′U = (1/36)[ −4     2+4i ][ −4     −2−4i ]
                [ −2+4i  −4   ][  2−4i  −4    ]

 = (1/36)[ 16 + (2+4i)(2−4i)      (−4)(−2−4i) + (2+4i)(−4) ]
         [ (−2+4i)(−4) − 4(2−4i)  (−2+4i)(−2−4i) + 16      ]

 = (1/36)[ 16+20         (8+16i)−(8+16i) ] = (1/36)[ 36  0  ] = I
         [ (8−16i)−(8−16i)   20+16       ]         [ 0   36 ]
Hence U = (I − N)(I + N)⁻¹ is unitary.
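Example 52 can be reproduced numerically; a sketch:

```python
import numpy as np

# Example 52: U = (I - N)(I + N)^{-1} for the given N is unitary
# (N is skew-Hermitian, so this Cayley-type transform yields a unitary matrix).
N = np.array([[ 0,      1 + 2j],
              [-1 + 2j, 0     ]])
I = np.eye(2)

U = (I - N) @ np.linalg.inv(I + N)

# Value obtained in the worked example.
expected = np.array([[-4,     -2 - 4j],
                     [ 2 - 4j, -4    ]]) / 6
```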
Brief about special types of matrices
To any matrix [aij], we call
(i) Symmetric if [aij] = [aij]´ (ii) Skew-symmetric if [aij] = – [aij]´
(iii) Involutary if [aij] = [aij]–1 (iv) Orthogonal if [aij]´ = [aij]–1
(vii) Skew-Hermitian if [aij] = −[āij]′ (viii) Unitary if [āij]′ = [aij]⁻¹
 [A(t)] = [aij(t)] = [ a11(t)   a12(t)  …  a1n(t) ]
                     [ ………………………                  ] …(1)
                     [ am1(t)   am2(t)  …  amn(t) ]
Definition 2: The integral of the matrix [A(t)] is a matrix to which we denote as ∫ [A(t)]dt
whose elements are equal to the integrals of the elements of the given matrix:
More precisely,
 ∫A(t)dt = [∫aij(t)dt] …(7)
The symbol ∫( )dt is sometimes denoted by a single letter, say S, and then we can write equation (7), as we did in (5),
 S[A] = [SA]
 [x] = [ x1(t) ]
       [ x2(t) ] …(2)
       [ x3(t) ]
This is the solution matrix, or the vector solution, of the system (1). Writing the matrix of derivatives of the solutions:
 d[x]/dt = [ dx1/dt ]
           [ dx2/dt ]
           [   ⋮    ] …(3)
           [ dxn/dt ]
Let us write down the matrix of coefficients of the system of differential equations. The system is then
 [ dx1/dt ]   [ a11  a12  …  a1n ][ x1(t) ]
 [   ⋮    ] = [ ………………           ][   ⋮   ] …(5)
 [ dxn/dt ]   [ an1  an2  …  ann ][ xn(t) ]
or, more precisely, on the basis of the rule for differentiation of matrices,
 d[x(t)]/dt = [a][x] …(6)
 x = Ce^(kt) with C = [ C1 ]
                      [ C2 ]
                      [ ⋮  ] …(10)
                      [ Cn ]
Substituting (9) into (7), and using the rule for multiplication of a matrix by a scalar together with the rule for differentiating matrices, we get
 d/dt(e^(λt)α) = a e^(λt)α …(11)
Whence λα = aα
or aα − λα = 0 …(12)
The matrix equation (12) can also be written as:
 (a − λI)α = 0, …(13)
where I is an identity matrix of order n.
where I is an identity matrix of order n.
In expanded form, equation (13) is thus:
 [ a11−λ   a12    …   a1n   ][ α1 ]
 [ a21     a22−λ  …   a2n   ][ α2 ] = 0 …(14)
 [ ……………………                 ][ ⋮  ]
 [ an1     an2    …   ann−λ ][ αn ]
Equation (12) shows that the vector α is transformed by the matrix a into the parallel vector λα. Hence, the vector α is an ‘eigen vector’ of the matrix a corresponding to the ‘eigen value’ λ. In scalar form, equation (12), as a system of algebraic equations, is thus:
This equation is called the auxiliary equation or characteristic equation and its roots are
called the roots of the auxiliary characteristic equation.
Case I: The roots of the auxiliary equation are real and distinct.
Let λ1, λ2, …, λn be the roots of the auxiliary equation. For each root λi, write the system
of equations (15) and determine the coefficients α1(i), α2(i), …, αn(i). It may be shown that one
of them is arbitrary and be considered equal to unity. Thus, we obtain:
For the root λ1, the solution of the system (10):
 x1^(1) = α1^(1) e^(λ1 t), x2^(1) = α2^(1) e^(λ1 t), ……, xn^(1) = αn^(1) e^(λ1 t)
………………………………………………
For the root λn, the solution of the system (10):
 x1^(n) = α1^(n) e^(λn t), x2^(n) = α2^(n) e^(λn t), ……, xn^(n) = αn^(n) e^(λn t)
λ1 = α + iβ, λ2 = α – iβ
…(21)
To these roots will correspond the solutions:
 xj^(1) = αj^(1) e^((α+iβ)t), (j = 1, 2, …, n)
 xj^(2) = αj^(2) e^((α−iβ)t), (j = 1, 2, …, n) …(22)
The coefficients αj(1) and αj(2) are determined from the system of equation (14).
Since the real and imaginary parts of the complex solution are also solutions.
We, thus, obtain two particular solutions:
 x̄j^(1) = e^(αt)(λj^(1) cos βt + λj^(2) sin βt)
 x̄j^(2) = e^(αt)(λ̄j^(1) sin βt + λ̄j^(2) cos βt) …(23)
where λj^(1), λj^(2), λ̄j^(1), λ̄j^(2) are real numbers determined in terms of αj^(1) and αj^(2).
Appropriate combinations of functions (23) will enter into general solution of the system.
Example 53: Write down in the matrix form of the system and the solution of the system
of linear differential equations:
dx1 dx2
= 2x1 + 2x2 , = x1 + 3x2 .
dt dt
Solution: In matrix form the system is
 [ dx1/dt ]   [ 2  2 ][ x1 ]
 [ dx2/dt ] = [ 1  3 ][ x2 ] …(1)
Now the corresponding characteristic equation is
 | 2−λ   2  |
 | 1    3−λ | = 0, i.e. λ² − 5λ + 4 = 0
whence λ1 = 1, λ2 = 4 …(2)
Now formulate the matrix equation [A − λI][α] = 0 with the column matrix α = [α1, α2]′, i.e.
 (a11 − λ)α1 + a12α2 = 0
 a21α1 + (a22 − λ)α2 = 0 …(3)
For λ = 1: (2 − 1)α1^(1) + 2α2^(1) = 0 and α1^(1) + (3 − 1)α2^(1) = 0, both giving α1^(1) + 2α2^(1) = 0; take α^(1) = [1, −1/2]′.
For λ = 4: (2 − 4)α1^(2) + 2α2^(2) = 0, i.e. α1^(2) = α2^(2); take α^(2) = [1, 1]′.
Hence
 [ x1 ]   [  1    1 ][ C1e^(λ1 t) ]
 [ x2 ] = [ −1/2  1 ][ C2e^(λ2 t) ]
Therefore, we have
 x1 = C1e^t + C2e^(4t)
 x2 = −(1/2)C1e^t + C2e^(4t)
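Example 53 can be checked numerically; a sketch that builds the general solution from the eigen data and verifies dx/dt = ax at one point by a central difference (the choice of constants and test point is illustrative):

```python
import numpy as np

# Example 53: dx/dt = a x with a = [[2, 2], [1, 3]]; eigenvalues 1 and 4.
a = np.array([[2.0, 2.0],
              [1.0, 3.0]])

lam, V = np.linalg.eig(a)
order = np.argsort(lam)
lam, V = lam[order], V[:, order]      # lam = [1, 4]

# General solution x(t) = C1 v1 e^t + C2 v2 e^{4t}.
C1, C2 = 1.0, 2.0
x = lambda t: C1 * V[:, 0] * np.exp(lam[0] * t) + C2 * V[:, 1] * np.exp(lam[1] * t)

t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)      # central difference
residual = np.linalg.norm(deriv - a @ x(t))  # should be ~0
```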
Example 54: Write in matrix form the system and the solution of the system of differential
equations
dx1 dx2 dx3
= x1 , = x1 + 2 x2 , = x1 + x2 + 3x3 .
dt dt dt
Solution: In matrix form, the system of equations is written as:
 [ dx1/dt ]   [ 1  0  0 ][ x1 ]
 [ dx2/dt ] = [ 1  2  0 ][ x2 ]
 [ dx3/dt ]   [ 1  1  3 ][ x3 ]
Let us form the characteristic equation and find its roots,
 | 1−λ  0    0  |
 | 1    2−λ  0  | = 0, i.e. (1 − λ)(2 − λ)(3 − λ) = 0
 | 1    1   3−λ |
whence λ = 1, 2, 3.
Corresponding to λ = 1, find α1^(1), α2^(1), α3^(1) from the system of equations below:
 [ 0  0  0 ][ α1 ]
 [ 1  1  0 ][ α2 ] = 0
 [ 1  1  2 ][ α3 ]
i.e. α1^(1) + α2^(1) = 0, α1^(1) + α2^(1) + 2α3^(1) = 0
From the above, we have α3^(1) = 0, with α1^(1) = 1, α2^(1) = −1.
Similarly, corresponding to λ = 2, determine α1(2), α2(2), α3(2).
i.e. −α1^(2) = 0, α1^(2) = 0, α1^(2) + α2^(2) + α3^(2) = 0
From the above, we find α1^(2) = 0, α2^(2) = 1, α3^(2) = −1.
Likewise, corresponding to λ = 3, we determine α1(3), α2(3) , α3(3).
We obtain
 −2α1^(3) = 0, α1^(3) − α2^(3) = 0, α1^(3) + α2^(3) = 0
or α1^(3) = 0, α2^(3) = 0, α3^(3) = 1.
Consequently, in the matrix form, the solution of the given system of equations can be
written as:
 [ x1 ]   [  1   0  0 ][ C1e^t    ]
 [ x2 ] = [ −1   1  0 ][ C2e^(2t) ]
 [ x3 ]   [  0  −1  1 ][ C3e^(3t) ]
or x1 = C1e^t, x2 = −C1e^t + C2e^(2t), x3 = −C2e^(2t) + C3e^(3t)
ASSIGNMENT 5
Solve the following system of linear differential equations by the matrix method:
dx1 dx2
+ x2 = 0, + 4x1 = 0
dt dt
B. Matrix Notation for a Linear Equations of Order n
Suppose we have an nth order linear differential equation with constant coefficients:
 d^n x/dt^n = an d^(n−1)x/dt^(n−1) + an−1 d^(n−2)x/dt^(n−2) + … + a1x …(1)
Later we will observe that this way of numbering the coefficients is convenient.
Take x = x1 and put
 dx1/dt = x2
 dx2/dt = x3
 ………………
 dxn−1/dt = xn …(2)
and then, by (1),
 dxn/dt = a1x1 + a2x2 + … + anxn
 [ dx1/dt   ]   [ 0   1   0  …  0  ][ x1   ]
 [ dx2/dt   ]   [ 0   0   1  …  0  ][ x2   ]
 [   ⋮      ] = [ .   .   .  …  .  ][  ⋮   ] …(4)
 [ dxn−1/dt ]   [ 0   0   0  …  1  ][ xn−1 ]
 [ dxn/dt   ]   [ a1  a2  a3 …  an ][ xn   ]
or, briefly, d[x]/dt = [a*]·[x] …(5)
Example 55: Write the equation d²x/dt² = p dx/dt + qx in matrix form.
Solution: Put x = x1; then dx1/dt = x2 and dx2/dt = px2 + qx1.
The system of equations in matrix form looks like this:
 [ dx1/dt ]   [ 0  1 ][ x1 ]
 [ dx2/dt ] = [ q  p ][ x2 ]
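The construction of the companion matrix [a*] above can be sketched in code; the helper name `companion` and the values of p, q are illustrative, not from the text:

```python
import numpy as np

# Matrix form of x^(n) = a1 x + a2 x' + ... + an x^(n-1):
# superdiagonal of ones, coefficients in the last row.
def companion(coeffs):
    """coeffs = [a1, ..., an]."""
    n = len(coeffs)
    a_star = np.zeros((n, n))
    a_star[:-1, 1:] = np.eye(n - 1)   # dx_i/dt = x_{i+1}
    a_star[-1, :] = coeffs            # last row carries the coefficients
    return a_star

# Example 55 with illustrative values p = 3, q = -2, i.e. x'' = 3x' - 2x,
# whose characteristic roots are 1 and 2.
p, q = 3.0, -2.0
M = companion([q, p])
```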
C. Solving System of Linear Differential Equations with Variable Coefficients by the
Method of Successive Approximations
Let it required to find the solution of the system of linear differential equations with
variable coefficients.
 dx1/dt = a11(t)x1 + a12(t)x2 + … + a1n(t)xn
 dx2/dt = a21(t)x1 + a22(t)x2 + … + a2n(t)xn
 ………………………………………… …(1)
 dxn/dt = an1(t)x1 + an2(t)x2 + … + ann(t)xn
that satisfy the initial conditions.
If the initial values are written as the column matrix
 [x0] = [ x10 ]
        [ x20 ]
        [  ⋮  ] …(3)
        [ xn0 ]
then the system of equations (1) with initial conditions (2) can be written as:
 d[x]/dt = [a(t)]·[x] …(4)
Here, [a(t)] is again coefficients matrix of the system. We will solve the problem by the
method of successive approximations.
To get a better grasp of the material that follows, let us apply to the method of successive
approximations first to a single linear equation of the first order.
It is required to find the solution of the single equation
 dx/dt = a(t)x …(5)
for the initial condition x = x0 at t = t0.
On the assumption that a(t) is a continuous function, the solution of the differential equation (5) with this initial condition reduces to the integral equation
 x = x0 + ∫ₜ₀ᵗ a(z)x(z) dz …(6)
We will solve this equation by the method of successive approximations:
 x1 = x0 + ∫ₜ₀ᵗ a(z)x0 dz
 x2 = x0 + ∫ₜ₀ᵗ a(z)x1(z) dz
 …………………………… …(7)
 xm = x0 + ∫ₜ₀ᵗ a(z)xm−1(z) dz
 ……………………………
We introduce the operator S (the integration operator),
 S( ) = ∫ₜ₀ᵗ ( ) dz …(8)
Using the operator S, we can write equations (7) as follows:
 x1 = x0 + S(ax0)
 x2 = x0 + S(ax1) = x0 + S(a(x0 + S(ax0)))
 x3 = x0 + S(a(x0 + S(a(x0 + S(ax0)))))
 ……………………………………………… …(9)
 xm = x0 + S(a(x0 + S(a(x0 + … + S(ax0)…))))
Expanding, we get
 xm = x0 + Sax0 + SaSax0 + SaSaSax0 + … + SaSa…Sax0 (m factors Sa in the last term)
Taking x0 outside the brackets (x0 being constant), we obtain
 xm = {1 + Sa + SaSa + … + SaSa…Sa}x0 …(10)
It has been proved that if a(t) is a continuous function, then the sequence [xm] converges. The limit of this sequence is the convergent series
 x = [1 + Sa + SaSa + …]x0 …(11)
Note: If a(t) = const., then formula (11) assumes a simple form. Indeed, by (8) we can write
 Sa = aS(1) = a(t − t0)
 SaSa = a²S(t − t0) = a²(t − t0)²/2
 ……………………………… …(12)
 SaSa…Sa (m factors) = aᵐ(t − t0)ᵐ/m!
so that the series (11) sums to x = e^(a(t − t0)) x0.
In the matrix case, similarly,
 x(t) = [x0] + ∫ₜ₀ᵗ a(z1){[x0] + ∫ₜ₀^z1 a(z2)([x0] + ∫ₜ₀^z2 a(z3)(…) dz3) dz2} dz1 + …
and, for a constant coefficient matrix [a],
 S[a]S[a] = [a]²(t − t0)²/2!
 S[a]S[a]S[a] = [a]³(t − t0)³/3!
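For a constant coefficient matrix, the successive-approximation series is just the matrix exponential; a sketch that sums partial terms and compares against the exact eigen-solution (the matrix is reused from Example 53, and the choice of x0 and t is illustrative):

```python
import numpy as np

# x(t) = e^{a (t - t0)} x0: partial sums of I + a(t-t0) + a^2 (t-t0)^2/2! + ...
a = np.array([[2.0, 2.0],
              [1.0, 3.0]])
x0 = np.array([1.0, -1.0])
t, t0 = 0.5, 0.0

term = np.eye(2)
series = np.eye(2)
for m in range(1, 30):               # 30 terms is ample for this t - t0
    term = term @ a * (t - t0) / m   # builds a^m (t - t0)^m / m!
    series = series + term

x_t = series @ x0
```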
ANSWERS
Assignment 4
1. 1, 2, 4; (1, 0, 0), (0, 1, 1), (0, 1, −1); [ 1  0   0 ]; x1² + 2x2² + 4x3²
                                              [ 0  1   1 ]
                                              [ 0  1  −1 ]
2. x12 + 4x32 + 4x1x2 + 10x1x3 + 6x2x3
Assignment 5
x1 = c1e–2t + c2e2t, x2 = 2c1e–2t – 2c2e2t