
An Important Theorem :

Every representation of a finite group is equivalent to a unitary representation.
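As a concrete illustration of the averaging argument (Weyl's trick) behind this theorem, here is a minimal numerical sketch in Python/NumPy. The non-unitary two-dimensional representation of Z2 used below, and the helper names, are assumptions chosen only for this example; they are not part of the notes.

```python
import numpy as np

# A non-unitary 2-dimensional representation of Z2 = {e, a}, with a*a = e
# (an illustrative choice; any finite matrix group would do).
T = {
    "e": np.eye(2),
    "a": np.array([[1.0,  1.0],
                   [0.0, -1.0]]),
}

# Group-average H = (1/|G|) * sum_g T(g)^dagger T(g); H is Hermitian,
# positive definite, and satisfies T(g)^dagger H T(g) = H for every g.
H = sum(Tg.conj().T @ Tg for Tg in T.values()) / len(T)

# Hermitian square root S of H via its eigendecomposition.
w, V = np.linalg.eigh(H)
S = V @ np.diag(np.sqrt(w)) @ V.conj().T
S_inv = np.linalg.inv(S)

# The equivalent representation T'(g) = S T(g) S^(-1) is unitary.
for g, Tg in T.items():
    Tp = S @ Tg @ S_inv
    assert np.allclose(Tp.conj().T @ Tp, np.eye(2)), g
print("S T(g) S^(-1) is unitary for every g in G")
```

The change of basis S is the same for every group element, so T(G) and its unitarized version are equivalent representations.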


Schur’s Lemma 1 :
In an irreducible representation T(G) of a finite group G, if all the matrices T(Gi)
(Gi ∈ G) commute with a matrix P, then P is a scalar matrix (i.e., a multiple of the identity matrix).
Proof 1 :
Let T(Gi)n×n P = P T(Gi)n×n for an n-dimensional representation T(G). The LHS implies that
P must have n rows, while the RHS implies that it must have n columns. Hence P must be an n×n
matrix.
We shall assume that the representation T(G) is unitary (since we have the assurance of
the above-mentioned Theorem). This is only required so that we may assume each T(Gi) has a
complete set of eigenvectors. As P commutes with the T(Gi), it will also have a set of n linearly
independent eigenvectors, say {xi}. Let one of them be x1, such that :
P x1 = λ1 x1
We shall show that λ1 is n-fold degenerate. Suppose instead that λ1 is m-fold degenerate, where m < n,
i.e., {xi | i = 1, 2, . . . , m} are the m linearly independent eigenvectors corresponding to the
eigenvalue λ1. If x1 is one such eigenvector, then ∀ Gi ∈ G : T(Gi) x1 is also an eigenvector
corresponding to the same eigenvalue, because
P T(Gi) x1 = T(Gi) P x1 = T(Gi) (λ1 x1) = λ1 T(Gi) x1
Hence T(Gi) x1 can be expanded in terms of the basis {xi | i = 1, 2, . . . , m}, and we obtain an
m-dimensional representation of G. However, we have assumed that the representation T(G)
is irreducible. So m cannot be less than n.
Thus, the eigenvalue λ1 must be n-fold degenerate, i.e., all the eigenvalues of P must equal λ1,
implying that :
P = λ1 I
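For a quick numerical check of the lemma (not part of the original notes), the sketch below takes the two-dimensional irreducible representation of S3 (realized as the symmetry group D3 of a triangle, an assumption made here for illustration), builds a matrix P that commutes with every T(Gi) by group averaging, and verifies that P comes out as a multiple of the identity.

```python
import numpy as np

# The 2-dimensional irreducible representation of S3 ~ D3:
# generated by a rotation R through 2*pi/3 and a reflection F.
th = 2 * np.pi / 3
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])
G = [np.eye(2), R, R @ R, F, R @ F, R @ R @ F]   # all 6 elements

# Group averaging of an arbitrary seed matrix X produces
# P = (1/|G|) sum_g T(g) X T(g)^(-1), which commutes with every T(g).
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2))
P = sum(Tg @ X @ np.linalg.inv(Tg) for Tg in G) / len(G)

assert all(np.allclose(P @ Tg, Tg @ P) for Tg in G)   # P commutes with T(G)
# As Schur's Lemma 1 predicts, P is a scalar matrix (here (tr X / 2) * I).
assert np.allclose(P, P[0, 0] * np.eye(2))
print(P)
```

Each term of the average has the same trace as X, so the scalar is fixed by tr P = tr X.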
Proof 2 :
Let us first restrict ourselves to the special case, where P is Hermitian. We require this
only to claim that P is diagonalizable :
U⁻¹ P U = PD = diag(λ1, λ2, . . . , λn)

∀ Gi ∈ G : P T(Gi) = T(Gi) P ⇒ U⁻¹ P T(Gi) U = U⁻¹ T(Gi) P U
⇒ (U⁻¹ P U)(U⁻¹ T(Gi) U) = (U⁻¹ T(Gi) U)(U⁻¹ P U)
⇒ PD T′(Gi) = T′(Gi) PD ,
where T′(Gi) = U⁻¹ T(Gi) U defines an equivalent representation T′(G) of G
⇒ Σk [PD]ik [T′(Gi)]km = Σk [T′(Gi)]ik [PD]km
⇒ Σk λi δik [T′(Gi)]km = Σk [T′(Gi)]ik λk δkm
⇒ λi [T′(Gi)]im = [T′(Gi)]im λm
We are going to prove that all the λi's must be the same. If not, suppose that only the first
p eigenvalues are equal to one another (p < n). [We can bring the equal eigenvalues to the front by adjusting the
columns of the diagonalizing matrix U.]
Thus λi = λm if i, m ≤ p, but λi ≠ λm if i ≤ p and m > p
⇒ [T′(Gi)]im = 0, if i ≤ p and m > p
This shows that the representation T′(G) is reducible, and so is T(G). However, we assumed
T(G) is irreducible. Hence all the λi's must be equal, say λ, which means :
P = λ I
Now we generalize to the case where P is non-Hermitian.
∀ Gi ∈ G : P T(Gi) = T(Gi) P ⇒ T(Gi)† P† = P† T(Gi)†,
but T(Gi)† = T(Gi)⁻¹ [as T(Gi) is unitary] = T(Gi⁻¹)
[as Gi → T(Gi) is a homomorphism]
⇒ ∀ Gi ∈ G : T(Gi⁻¹) commutes with P†
⇒ ∀ Gi ∈ G : T(Gi) commutes with P† [since Gi⁻¹ runs over all of G as Gi does]
Now any matrix P may be expressed as : P = H1 + iH2 = (P + P†)/2 + i (P – P†)/(2i),
where both H1 = (P + P†)/2 and H2 = (P – P†)/(2i) are Hermitian.
Both commute with every T(Gi); hence, by the Hermitian case above, each is a multiple of the identity, say αI and βI.
⇒ P = (α + iβ) I
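A short numerical check of this Hermitian decomposition (again only an illustrative sketch, with a randomly chosen P that is not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # an arbitrary matrix

H1 = (P + P.conj().T) / 2        # Hermitian part of P
H2 = (P - P.conj().T) / (2j)     # also Hermitian

assert np.allclose(H1, H1.conj().T)      # H1 = H1^dagger
assert np.allclose(H2, H2.conj().T)      # H2 = H2^dagger
assert np.allclose(P, H1 + 1j * H2)      # P = H1 + i H2
```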
Schur’s Lemma 2 :
If T(G) and S(G) are two irreducible representations of a finite group G and M is a
matrix such that ∀ Gi ∈ G : T(Gi) M = M S(Gi), then either M = 0 (i.e., the null matrix), or M is
invertible [which means : T(G) and S(G) are equivalent].
Proof :
∀ Gi ∈ G : T(Gi) M = M S(Gi) ----- (1)
⇒ M† T(Gi)† = S(Gi)† M†
⇒ M† T(Gi)⁻¹ = S(Gi)⁻¹ M† [as T(G) and S(G) are unitary]
⇒ ∀ Gi ∈ G : M† T(Gi⁻¹) = S(Gi⁻¹) M† [as Gi → T(Gi) and Gi → S(Gi) are homomorphisms]
⇒ ∀ Gi ∈ G : M† T(Gi) = S(Gi) M† ----- (2)
Multiplying (1) by M† from the left : M† T(Gi) M = M†M S(Gi)
Multiplying (2) by M from the right : M† T(Gi) M = S(Gi) M†M
⇒ M†M S(Gi) = S(Gi) M†M
Thus, M†M commutes with S(Gi) ∀ Gi ∈ G ⇒ M†M = λ I [by Schur’s Lemma 1]
If λ = 0 : M†M = 0 ⇒ Tr (M†M) = 0
⇒ Σi Σk (M†)ik (M)ki = 0
⇒ Σi Σk (M*)ki (M)ki = 0, i.e., Σi Σk |Mki|² = 0
⇒ Mki = 0 ∀ i, k, i.e., M = 0
If λ ≠ 0 : M†M = λ I ⇒ det (M†M) = λⁿ, where M†M is an n×n matrix

If T(Gi)m×m M = M S(Gi)n×n , the LHS implies that M must have m rows, while the RHS
implies that it must have n columns. Hence M must be an m×n matrix.
If m ≠ n, M⁻¹ obviously cannot exist.
If m = n, det (M†M) = λⁿ ⇒ det (M†) det (M) = λⁿ
⇒ |det M|² = λⁿ ≠ 0, i.e., M is invertible.
Hence T(Gi) M = M S(Gi) ⇒ S(Gi) = M⁻¹ T(Gi) M
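Finally, a small numerical illustration of the M = 0 branch of the lemma (an addition for illustration only; the group, the two irreducible representations, and the seed matrix are assumptions). Averaging an arbitrary rectangular matrix between the 2-dimensional irrep T and the 1-dimensional sign irrep S of S3 yields an intertwiner satisfying T(Gi) M = M S(Gi) for all Gi, and since T and S are inequivalent, Schur's Lemma 2 forces M to vanish.

```python
import numpy as np

# Two inequivalent irreps of S3 ~ D3 (same element order in both lists):
#   T: the 2-dimensional irrep, S: the 1-dimensional sign representation.
th = 2 * np.pi / 3
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])
T = [np.eye(2), R, R @ R, F, R @ F, R @ R @ F]
S = [np.array([[sgn]]) for sgn in (1.0, 1.0, 1.0, -1.0, -1.0, -1.0)]

# Build an intertwiner by averaging: M = (1/|G|) sum_g T(g) X S(g)^(-1);
# by construction it satisfies T(g) M = M S(g) for every g.
rng = np.random.default_rng(2)
X = rng.normal(size=(2, 1))
M = sum(Tg @ X @ np.linalg.inv(Sg) for Tg, Sg in zip(T, S)) / len(T)

assert all(np.allclose(Tg @ M, M @ Sg) for Tg, Sg in zip(T, S))
# T and S are inequivalent (different dimensions), so M must be the null matrix.
assert np.allclose(M, 0.0)
print(M)
```

Repeating the same construction with S replaced by an equivalent copy of T would (for a generic seed X) instead produce an invertible M, matching the second branch of the lemma.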
