Fivel
Chapter 0
Survey of Linear Algebra Needed for 622 and 623.
An $N$-dimensional linear vector space $S$ over the field of complex numbers can be defined in the following way: The space contains a set of elements denoted $|e_1\rangle, \cdots, |e_N\rangle$ called a "basis". Given any set of $N$ complex numbers $(\alpha_1, \cdots, \alpha_N)$ we construct an element $|v\rangle$ of $S$ represented by
$$|v\rangle = \sum_{j=1}^{N} \alpha_j |e_j\rangle. \tag{0.1}$$
(Note: In the following the symbol $|v'\rangle$ is to be read as (0.1) with primes on the $\alpha$'s.)
If $\eta, \xi$ are any pair of complex numbers, and $|v\rangle, |v'\rangle$ are elements of $S$, we form linear combinations by the rule
$$\eta|v\rangle + \xi|v'\rangle = \sum_{j=1}^{N} (\eta\alpha_j + \xi\alpha'_j)|e_j\rangle. \tag{0.2}$$
Linear functionals on $S$ are linear maps of elements of $S$ into complex numbers. The set of linear functionals on $S$ also forms an $N$-dimensional complex vector space, which is constructed as follows: With each basis element $|e_k\rangle$ define its dual, indicated by $\langle e_k|$, as the linear map which takes $|e_k\rangle$ to unity but takes all $|e_j\rangle$ with $j \neq k$ to zero. Thus, introducing the Kronecker symbol,
$$\langle e_k|e_j\rangle = \delta_{kj}, \tag{0.3}$$
so that
$$\langle e_k|v\rangle = \langle e_k| \sum_{j=1}^{N} \alpha_j|e_j\rangle = \sum_{j=1}^{N} \alpha_j\,\langle e_k|e_j\rangle = \alpha_k. \tag{0.4}$$
We define the dual of an arbitrary element $|v\rangle$ given by (0.1) as the map
$$\langle v| = \sum_{j=1}^{N} \alpha_j^*\,\langle e_j|, \tag{0.5}$$
which means that its action on any element $|v'\rangle$ is given by:
$$\langle v|v'\rangle = \sum_{j=1}^{N} \alpha_j^*\,\alpha'_j, \tag{0.6a}$$
in which $^*$ means complex conjugate. Note that
$$\langle v|v\rangle = \sum_{j=1}^{N} |\alpha_j|^2 \ge 0. \tag{0.6b}$$
One can think of (0.6) as defining a complex scalar product between elements $|v\rangle$ and $|v'\rangle$ of $S$. Note, however, that unlike the real scalar product it is not symmetric; rather, $\langle v|v'\rangle = \langle v'|v\rangle^*$.
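The scalar product (0.6) and its conjugate symmetry can be spot-checked numerically. The sketch below (component values arbitrary, chosen only for illustration) uses numpy's `vdot`, which conjugates its first argument exactly as (0.6a) prescribes.

```python
import numpy as np

alpha = np.array([1 + 1j, 2 - 1j, 0.5j])    # components of |v>  (illustrative)
alpha_p = np.array([0.3, -1j, 2 + 2j])      # components of |v'> (illustrative)

# <v|v'> = sum_j alpha_j^* alpha'_j ; vdot conjugates its first argument.
inner = np.vdot(alpha, alpha_p)
assert np.isclose(inner, np.sum(alpha.conj() * alpha_p))

# <v|v> = sum |alpha_j|^2 is real and non-negative, as in (0.6b).
assert np.isclose(np.vdot(alpha, alpha).imag, 0.0)
assert np.vdot(alpha, alpha).real >= 0

# Not symmetric: <v|v'> equals the conjugate of <v'|v>.
assert np.isclose(np.vdot(alpha, alpha_p), np.vdot(alpha_p, alpha).conjugate())
```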
We can construct linear operators on $S$ which map $S$ to itself. These are formed as follows: Let $|v\rangle, |v'\rangle$ be any elements of $S$. We form primitive operators called "dyads", represented by $|v\rangle\langle v'|$, whose action on any element $|w\rangle$ is given by
$$\bigl(|v\rangle\langle v'|\bigr)|w\rangle = \langle v'|w\rangle\,|v\rangle.$$
A general linear operator is then a sum of dyads, $V = \sum_{j,k} V_{jk}\,|e_j\rangle\langle e_k|$, in which the complex numbers $V_{jk}$ form its matrix in the $e$-basis.
Of special importance is the operator denoted $I$, in which $V_{jk} = \delta_{jk}$, i.e.
$$I = \sum_j |e_j\rangle\langle e_j|. \tag{0.11a}$$
* For infinite dimensional matrices there is a difference in the meaning of these two terms
√ Exercise 1: Prove the following:
For every unitary operator there is a transformation from the $e$-basis to another basis defined by:
$$|f_j\rangle = U|e_j\rangle, \quad j = 1, \cdots, N. \tag{0.21}$$
Notice that:
$$\langle f_k|f_n\rangle = \langle e_k|U^\dagger U|e_n\rangle = \langle e_k|e_n\rangle = \delta_{kn}. \tag{0.22}$$
One readily checks that the unit operator can be represented also as
$$I = \sum_{j=1}^{N} |f_j\rangle\langle f_j|. \tag{0.23}$$
Hence
$$|v\rangle = \sum_{j=1}^{N} |f_j\rangle\langle f_j|v\rangle = \sum_{j=1}^{N} \beta_j|f_j\rangle, \tag{0.24}$$
in which
$$\beta_j = \langle f_j|v\rangle. \tag{0.25}$$
√ Exercise 2: Verify that the trace is invariant under a change of basis.
When we compare (0.24) with (0.1) we see two different expressions for the same vector $|v\rangle$, one in the $e$-basis and one in the $f$-basis. Thus expressions such as
$$|v\rangle \;\text{``=''}\; \begin{pmatrix}\alpha_1\\ \vdots\\ \alpha_N\end{pmatrix} \qquad\text{or}\qquad |v\rangle \;\text{``=''}\; \begin{pmatrix}\beta_1\\ \vdots\\ \beta_N\end{pmatrix}$$
must be handled with care. We shall refer to $|v\rangle$ as a "state" or a "state vector" or a "ket", and the column of components in the $e$-basis as the $e$-representation of the state. Similarly we distinguish between an operator $A$ and the $e$-representation of the operator by a matrix $\mathsf{A}$, which will be different in different bases.
$$|\langle \bar x|\bar y\rangle| = |\langle x|y\rangle|,$$
we have
$$\|x\| = \sqrt{\sum_{j=1}^{N} |x_j|^2}. \tag{0.29}$$
Writing a similar expression for $|y\rangle$ we have:
$$D(x, y) = \left(\sum_{j=1}^{N}\left[(\mathrm{Re}(x_j) - \mathrm{Re}(y_j))^2 + (\mathrm{Im}(x_j) - \mathrm{Im}(y_j))^2\right]\right)^{1/2}, \tag{0.30}$$
which shows that $D$ is just the Euclidean distance in a real space of $2N$ dimensions.
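The identification of $D$ with a Euclidean distance in $2N$ real dimensions is easy to confirm numerically; the vectors below are arbitrary illustrative data.

```python
import numpy as np

x = np.array([1 + 2j, -0.5j, 3.0])
y = np.array([0.2 - 1j, 1 + 1j, -2.0])

# Distance induced by the norm (0.29): D(x, y) = ||x - y||.
D = np.sqrt(np.sum(np.abs(x - y) ** 2))

# The same pair viewed in R^(2N): stack real and imaginary parts.
xr = np.concatenate([x.real, x.imag])
yr = np.concatenate([y.real, y.imag])
assert np.isclose(D, np.linalg.norm(xr - yr))   # equation (0.30)
```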
A function $\delta(x, y) \ge 0$ on a set $S$ with elements $x, y, z, \cdots$ is said to be a metric if
$$\delta(x, y) = 0 \iff x = y, \qquad \delta(x, y) = \delta(y, x), \qquad \delta(x, z) \le \delta(x, y) + \delta(y, z). \tag{0.31}$$
One sees that any anti-linear operator can be constructed as follows: Choose a distinguished basis $|e_1\rangle, \cdots, |e_N\rangle$, and let $A$ be a linear operator. Then define $B$ by its action on the element $|v\rangle$ of (0.1):
$$B|v\rangle = \sum_{j=1}^{N} \alpha_j^*\, A|e_j\rangle.$$
We write formally:
$$B = KA, \tag{0.38}$$
where $K$ is the complex-conjugation operator. It is important to understand that (0.38) is meaningful only with respect to a selected basis. If the linear operator $A$ in (0.38) is unitary, then $B$ is said to be an anti-unitary operator.
Theorem 3: Unitary and anti-unitary maps are isometries with respect to $D$.
√ Exercise 4: Prove Theorem 3. You must show that the map is one-one and preserves the norm of any vector.
This theorem naturally raises the question of whether all isometries are unitary or anti-unitary maps. The answer is contained in the following important theorem:
Theorem 4 (Wigner's Theorem): Let $|x\rangle \to |\tilde x\rangle$ be an isometry with respect to $D$. Then there exists a choice of unimodular phases $e^{i\delta(x)}$ such that the map $|x\rangle \to |\bar x\rangle \equiv e^{i\delta(x)}|\tilde x\rangle$ is unitary or anti-unitary.
I will give the proof only for a two-dimensional space, which contains all of the essential ideas.
Proof for $N = 2$:
Choose a basis $|1\rangle, |2\rangle$ and let their images under the isometry be $|\tilde 1\rangle, |\tilde 2\rangle$ respectively. Then by the Corollary to Theorem 2 above the images form an orthonormal basis. Then the image of any vector $|x\rangle = \alpha_1|1\rangle + \alpha_2|2\rangle$ is a vector $|\tilde x\rangle = \tilde\alpha_1|\tilde 1\rangle + \tilde\alpha_2|\tilde 2\rangle$. But since $\alpha_j = \langle j|x\rangle$ and $\tilde\alpha_j = \langle\tilde j|\tilde x\rangle$, Theorem 2 gives:
$$|\tilde\alpha_j| = |\alpha_j|. \tag{0.39}$$
Now consider the state $|r\rangle \equiv |1\rangle + |2\rangle$. By (0.39) we must have unimodular phases $e^{i\delta_j}$, $j = 1, 2$ such that
$$|\tilde r\rangle = e^{i\delta_1}|\tilde 1\rangle + e^{i\delta_2}|\tilde 2\rangle. \tag{0.40}$$
Thus if we define an orthonormal basis
$$|\bar j\rangle \equiv e^{i\delta_j}|\tilde j\rangle, \quad j = 1, 2, \tag{0.41}$$
we have
$$|1\rangle + |2\rangle \to |\bar 1\rangle + |\bar 2\rangle. \tag{0.42}$$
Now consider the vector $|u\rangle \equiv |1\rangle + i|2\rangle$. Once again we deduce that there must be unimodular phases $e^{i\lambda_j}$, $j = 1, 2$ such that
$$|\tilde u\rangle = e^{i\lambda_1}|\bar 1\rangle + i\,e^{i\lambda_2}|\bar 2\rangle,$$
whence, comparing $|\langle r|u\rangle|$ with the scalar product of the images,
$$\sqrt{2} = |1 + i| = |e^{i\lambda_1} + i e^{i\lambda_2}| = \sqrt{2 + 2\sin(\lambda_1 - \lambda_2)}, \tag{0.45}$$
whence $\lambda_1 - \lambda_2 = 0$ or $\pi$. In the first case we have $|\tilde u\rangle = e^{i\lambda_1}(|\bar 1\rangle + i|\bar 2\rangle)$, and in the second $|\tilde u\rangle = e^{i\lambda_1}(|\bar 1\rangle - i|\bar 2\rangle)$.
To summarize the result so far, we have shown that the phases can be chosen such that:
$$|j\rangle \to |\bar j\rangle, \qquad |1\rangle + |2\rangle \to |\bar 1\rangle + |\bar 2\rangle, \qquad |1\rangle + i|2\rangle \to e^{i\lambda_1}\bigl(|\bar 1\rangle \pm i|\bar 2\rangle\bigr). \tag{0.47}$$
Now for $|x\rangle = \alpha_1|1\rangle + \alpha_2|2\rangle$ and $|\bar x\rangle = \bar\alpha_1|\bar 1\rangle + \bar\alpha_2|\bar 2\rangle$, taking scalar products of $|x\rangle$ with $|r\rangle, |u\rangle$, and taking scalar products of $|\bar x\rangle$ with their images (0.47), we obtain from Theorem 2:
$$|\bar\alpha_j| = |\alpha_j|, \quad j = 1, 2, \tag{0.48a}$$
$$|\bar\alpha_1 + \bar\alpha_2| = |\alpha_1 + \alpha_2|, \tag{0.48b}$$
and either
$$|\bar\alpha_1 + i\bar\alpha_2| = |\alpha_1 + i\alpha_2| \tag{0.48c}$$
or
$$|\bar\alpha_1 + i\bar\alpha_2| = |\alpha_1 - i\alpha_2|. \tag{0.48d}$$
√ Exercise 5: Show that if $\alpha_1, \alpha_2$ are given, then these relations uniquely determine $\bar\alpha_1$ and $\bar\alpha_2$ up to an arbitrary common unimodular factor.
In the case (0.48a,b,c) it is seen from Exercise 5 that the general solution is:
$$\bar\alpha_j = e^{i\lambda}\alpha_j, \quad j = 1, 2, \tag{0.49a}$$
while in the case (0.48a,b,d) the general solution is
$$\bar\alpha_j = e^{i\lambda}\alpha_j^*, \quad j = 1, 2, \tag{0.49b}$$
where $e^{i\lambda}$ is a unimodular factor which may be different for different $x$. If we ignore the phase factor $e^{i\lambda}$, (0.49a) states that the transformation is linear, and (0.49b) states that it is anti-linear. Equation (0.48a) ensures that we thus have a unitary transformation in the one case and an anti-unitary transformation in the other. ∎
A6. The Space of Pure States. The Hilbert Sphere
It turns out (see chapter 1) that quantum mechanical states are represented by rays in the complex vector space. Thus all vectors which differ only by a (complex) multiple will represent the same state. If $|x\rangle$ is a unit vector, i.e. $\langle x|x\rangle = 1$, we still have all vectors of the form $e^{i\lambda}|x\rangle$ representing the same state. Hence it is useful to represent the state by a dyad $\pi(x)$ defined by:
$$\pi(x) = |x\rangle\langle x|, \tag{0.50}$$
which does not change when the unimodular factor is included.
Definition: The space $P$, referred to as the space of pure states, is the set of operators $\pi(x)$ as $|x\rangle$ ranges over the set of unit vectors of $S$.
Notice that because $|x\rangle$ is a unit vector
$$\pi(x)^2 = \pi(x), \tag{0.51a}$$
$$\pi(x)\pi(y) = 0 \iff \langle x|y\rangle = 0. \tag{0.51b}$$
In the two-dimensional case, the space of states has an interesting and important geometrical interpretation: The most general two-component complex unit vector can be written in the form:
$$e^{i\lambda}\begin{pmatrix} e^{i\phi/2}\cos\frac{\theta}{2} \\ e^{-i\phi/2}\sin\frac{\theta}{2} \end{pmatrix}, \qquad 0 \le \theta \le \pi,\; -\pi < \phi \le \pi, \tag{0.52}$$
with arbitrary unimodular factor $e^{i\lambda}$. Hence there is a one-one correspondence between $\pi(x)$ and a pair of angles $(\theta, \phi)$ whose range is the same as the latitude and azimuth on a sphere. In fact we have the representation:
$$|x\rangle\langle x| \iff \begin{pmatrix} \cos^2\frac{\theta}{2} & \frac{1}{2}e^{i\phi}\sin\theta \\ \frac{1}{2}e^{-i\phi}\sin\theta & \sin^2\frac{\theta}{2} \end{pmatrix}. \tag{0.53}$$
This sphere is known as the Poincaré sphere. For more than two dimensions the corresponding construction leads to a higher dimensional sphere known in general as a Hilbert sphere. The connection of the space of states with the geometry of the sphere becomes particularly evident when we compute the trace metric (see equation 0.35 above). We have
$$d(x_1, x_2) = \|\pi(x_1) - \pi(x_2)\| = \sqrt{1 - p(x_1, x_2)} \quad\text{with}\quad p(x_1, x_2) \equiv |\langle x_1|x_2\rangle|^2. \tag{0.54}$$
But
$$\langle x_1|x_2\rangle = e^{i(\lambda_2 - \lambda_1)}\left(\cos\frac{\theta_1}{2}\cos\frac{\theta_2}{2} + e^{i(\phi_2 - \phi_1)}\sin\frac{\theta_1}{2}\sin\frac{\theta_2}{2}\right); \tag{0.55}$$
$$|\langle x_1|x_2\rangle|^2 = \left|\cos\tfrac{\theta_1}{2}\cos\tfrac{\theta_2}{2} + e^{i(\phi_2 - \phi_1)}\sin\tfrac{\theta_1}{2}\sin\tfrac{\theta_2}{2}\right|^2 = \tfrac{1}{2}\bigl\{1 + \cos\theta_1\cos\theta_2 + \cos(\phi_2 - \phi_1)\sin\theta_1\sin\theta_2\bigr\} = \cos^2\bigl(\Psi(x_1, x_2)/2\bigr), \tag{0.56}$$
where $\Psi(x_1, x_2)$ is the arc of the great circle joining the points $(\theta_1, \phi_1)$ and $(\theta_2, \phi_2)$.
√ Exercise 6: Verify the last step. Hint: Express the coordinate vectors in three-dimensional spherical coordinates and compute the ordinary scalar product.
Thus
$$d(x_1, x_2) = \sin\frac{\Psi(x_1, x_2)}{2}, \tag{0.57}$$
which is half the chord length joining the points corresponding to $x_1$ and $x_2$ on the Poincaré sphere.
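Equations (0.52) through (0.57) can be verified numerically for sample angles. The sketch below builds the unit vectors of (0.52) (dropping the overall phase) and the corresponding points on the sphere; the helper names `ket` and `bloch` are ours, not the text's.

```python
import numpy as np

def ket(theta, phi):
    """Unit vector of the form (0.52), overall phase dropped."""
    return np.array([np.exp(1j * phi / 2) * np.cos(theta / 2),
                     np.exp(-1j * phi / 2) * np.sin(theta / 2)])

def bloch(theta, phi):
    """Point on the sphere at latitude theta, azimuth phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

t1, p1 = 0.7, 1.2    # arbitrary sample angles
t2, p2 = 2.1, -0.4

p = np.abs(np.vdot(ket(t1, p1), ket(t2, p2))) ** 2
cos_psi = np.dot(bloch(t1, p1), bloch(t2, p2))   # cos of great-circle arc

# (0.56): |<x1|x2>|^2 = (1 + cos Psi)/2 = cos^2(Psi/2)
assert np.isclose(p, (1 + cos_psi) / 2)

# (0.54) with (0.57): trace-metric distance = sin(Psi/2)
psi = np.arccos(cos_psi)
assert np.isclose(np.sqrt(1 - p), np.sin(psi / 2))
```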
Proof: If $|x\rangle \in \nu(A)$, i.e. $A|x\rangle = 0$, then $AA^\dagger|x\rangle = A^\dagger A|x\rangle = 0$, so that $A^\dagger|x\rangle \in \nu(A)$. But $A^\dagger|x\rangle \in R(A^\dagger)$, which is orthogonal to $\nu(A)$ by the last theorem, and hence $A^\dagger|x\rangle = 0$. Hence $|x\rangle \in \nu(A^\dagger)$, so $\nu(A) \subseteq \nu(A^\dagger)$. Interchanging the roles of $A$ and $A^\dagger$, it follows that $\nu(A) = \nu(A^\dagger)$. ∎
By adapting the basis to the spectral decomposition it is evident that the matrix representation of a normal matrix in that basis will look like the following:
$$\Lambda = \begin{pmatrix} \lambda_1 I_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 I_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & \lambda_k I_k \end{pmatrix}, \tag{0.60}$$
in which the $\lambda_j$'s are the eigenvalues and the $I_j$'s are unit matrices of dimension equal to that of the corresponding eigenmanifold. Since the transformation from one basis to another is effected by a unitary matrix, there will exist a unitary matrix $U$ that transforms the matrix $A$ representing the operator in a given basis into diagonal form. Thus we will have:
$$U A U^\dagger = \Lambda. \tag{0.61}$$
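A numerical illustration of the diagonalization just described, using a random hermitian (hence normal) matrix; `numpy.linalg.eigh` supplies the unitary change of basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random hermitian (hence normal) matrix, purely for illustration.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2

# eigh returns eigenvalues and an orthonormal eigenbasis (columns of V).
w, V = np.linalg.eigh(A)

# Unitarity of the change of basis, as in (0.22).
assert np.allclose(V.conj().T @ V, np.eye(4))

# In the adapted basis the representation is diagonal, as in (0.60).
Lam = V.conj().T @ A @ V
assert np.allclose(Lam, np.diag(w))

# Spectral decomposition: A = sum_j lambda_j |f_j><f_j|.
A_rebuilt = sum(w[j] * np.outer(V[:, j], V[:, j].conj()) for j in range(4))
assert np.allclose(A, A_rebuilt)
```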
The generalization of the ideas developed above for finite dimensional complex spaces to
infinite dimensional spaces is non-trivial. This should be clear from the fact that the
determination of eigenvalues is no longer a matter of finding the roots of a polynomial.
Questions of convergence also arise that were not present for finite dimensional spaces, e.g.
in determining the length of a vector.
Perhaps the most dramatic difference is that whereas all finite dimensional complex vector
spaces of the same dimension have the same structure, there are essentially different kinds
of infinite dimensional spaces. Fortunately, the infinite dimensional space we need in
quantum mechanics is the simplest type — one that has a countably infinite number of
dimensions or one that can be obtained from such a space by a limiting process. The former
is called a Hilbert space, but following the customary abuse of language by physicists, I
will use this term for the latter also.
We begin as in the finite dimensional case with a basis $|n\rangle$, where $n$ ranges over a countably infinite set. Usually we take the set to be $n = 0, 1, \cdots$ or $n = 0, \pm 1, \pm 2, \cdots$. Sums will be over this set. Define $\mathcal{H}$ as the set with elements
$$|v\rangle = \sum_n \alpha_n |n\rangle, \tag{0.62a}$$
in which the $\alpha$'s are complex numbers such that
$$\sum_n |\alpha_n|^2 < \infty. \tag{0.62b}$$
Theorem 13 (Schwarz inequality): For any $|f\rangle, |g\rangle$ we have
$$|\langle f|g\rangle| \le \|f\|\,\|g\|,$$
with equality if and only if one function is a constant multiple of the other.
This is the analogue of a familiar fact about vectors $\mathbf{a}, \mathbf{b}$ in finite dimensional real spaces, namely $\mathbf{a}\cdot\mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos\theta \implies |\mathbf{a}\cdot\mathbf{b}| \le |\mathbf{a}||\mathbf{b}|$, with equality if and only if one vector is a scalar multiple of the other.
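A quick numerical sanity check of the inequality for vectors of basis coefficients (the random data is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random complex "functions" represented by their basis coefficients.
f = rng.standard_normal(50) + 1j * rng.standard_normal(50)
g = rng.standard_normal(50) + 1j * rng.standard_normal(50)

norm = lambda v: np.sqrt(np.vdot(v, v).real)

# |<f|g>| <= ||f|| ||g||, strict for vectors in general position.
assert np.abs(np.vdot(f, g)) <= norm(f) * norm(g)

# Equality when one vector is a scalar multiple of the other.
h = (2 - 3j) * f
assert np.isclose(np.abs(np.vdot(f, h)), norm(f) * norm(h))
```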
√ Exercise 14: Prove Theorem 13. Hint: Write $|r\rangle = |f\rangle + \lambda|g\rangle$ where $\lambda$ is a (complex) scalar. Find the minimum of $\langle r|r\rangle$ by differentiating with respect to $\lambda$ or $\lambda^*$. Then use $\langle r|r\rangle \ge 0$.
Here and in the following, integrals over $x$ are on $[-L/2, L/2]$. Let $\mathcal{H}$ be a Hilbert space with a basis $\{|n\rangle\}$, and for each $x \in [-L/2, L/2]$ define the linear functional $\langle x|$ by:
$$\langle x| \equiv L^{-1/2}\sum_n e^{-2\pi i n x/L}\,\langle n|, \quad\text{so that}\quad \langle x|n\rangle = L^{-1/2}e^{2\pi i n x/L}. \tag{0.67}$$
From the identity
$$L^{-1}\int e^{2\pi i(n - m)x/L}\,dx = \delta_{nm}, \tag{0.68}$$
and writing
$$\langle n|f\rangle = f_n, \tag{0.72a}$$
we see from (0.67) that
$$\langle x|f\rangle = f(x), \tag{0.72b}$$
where $f(x) \equiv L^{-1/2}\sum_n f_n e^{2\pi i n x/L}$ is the function with Fourier coefficients $f_n$. Thus $|f\rangle$ is in the domain of the linear functional $\langle x|$.
√ Exercise 15: Let $f_0 = 0$ and $f_n = 1/|n|$ for $|n| > 0$. Show that $|f\rangle \in \mathcal{H}$ but $\langle x|f\rangle$ is infinite for $x = 0$.
Note that $|x\rangle$ itself is not an element of $\mathcal{H}$, since
$$\langle x|x\rangle = \sum_n |\langle x|n\rangle|^2 = L^{-1}\sum_n 1 = \infty. \tag{0.76}$$
Formally we have
$$\langle x'|x\rangle = \sum_n \langle x'|n\rangle\langle n|x\rangle = L^{-1}\sum_n e^{2\pi i n(x' - x)/L}, \tag{0.78}$$
and
$$\int dx\,\langle x'|x\rangle f(x) = \int dx\,\langle x'|x\rangle\langle x|f\rangle = \langle x'|I|f\rangle = \langle x'|f\rangle = f(x'). \tag{0.79}$$
Thus for any $f$ for which $|f\rangle$ is in the domain of $\langle x|$ for all $x$ in the interval, we see that $\langle x'|x\rangle$ is well defined when integrated against $f$. Such objects are called distributions, and the functions against which they can be integrated are called test functions. We write:
$$\langle x'|x\rangle \equiv \delta(x' - x), \tag{0.80}$$
since from (0.78) the dependence is on the difference of $x$ and $x'$. Equation (0.79) then reads:
$$\int dx\,\delta(x' - x)f(x) = f(x'). \tag{0.81}$$
√ Exercise 16: Verify the following properties of the $\delta$ function: (a) $\delta(x)$ is even. (b) $\delta(ax) = |a|^{-1}\delta(x)$. (c) Suppose that the class of test functions consists of functions with continuous derivatives of all orders that vanish at the end points of the integration interval. Obtain a suitable definition for the $n$th derivative of the delta function. (Hint: Integrate by parts.)
√ Exercise 17: Verify that the functions
$$f_\epsilon(x) \equiv \frac{1}{\pi}\,\frac{\epsilon}{x^2 + \epsilon^2}, \qquad \epsilon > 0,$$
whose graphs for successively smaller $\epsilon$ values are ever narrower and taller peaks, will produce the effect of the $\delta$-function when $\epsilon \to 0$.
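A numerical version of Exercise 17: integrate $f_\epsilon$ against a smooth test function and watch the value at the origin emerge. Quadrature here is a plain Riemann sum, so the grid spacing must be much finer than $\epsilon$.

```python
import numpy as np

def f_eps(x, eps):
    # The Lorentzian of Exercise 17: (1/pi) * eps / (x^2 + eps^2)
    return eps / (np.pi * (x ** 2 + eps ** 2))

eps = 0.01
x = np.linspace(-50.0, 50.0, 2_000_001)   # spacing 5e-5 << eps
dx = x[1] - x[0]

# Total weight is (approximately) 1, independent of eps...
assert abs(np.sum(f_eps(x, eps)) * dx - 1.0) < 1e-3

# ...and integration against a smooth test function g picks out g(0),
# the defining property (0.81); here g = cos, so g(0) = 1.
g = np.cos(x)
assert abs(np.sum(f_eps(x, eps) * g) * dx - 1.0) < 0.02
```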
Let us extend the analysis of the last section to allow the $x$-interval to become arbitrarily large. We write $k_n \equiv 2\pi n/L$, so that the spacing $2\pi/L$ between adjacent $k$-values becomes arbitrarily small. In the limit as $L \to \infty$ we can then replace sums over $n$ by an integral over $k$, i.e.
$$\sum_n \to \frac{L}{2\pi}\int dk. \tag{0.82}$$
Defining the rescaled vectors $|k\rangle \equiv \sqrt{L/2\pi}\,|n\rangle$ (with $k = k_n$), we then have
$$\langle x|k\rangle = \frac{1}{\sqrt{2\pi}}\,e^{ikx}, \tag{0.84}$$
and hence
$$\int dx\,|x\rangle\langle x| = I = \sum_n |n\rangle\langle n| \to \int dk\,|k\rangle\langle k|, \tag{0.85}$$
so that
$$\langle x|f\rangle = \langle x|I|f\rangle = \int dk\,\langle x|k\rangle\langle k|f\rangle = (2\pi)^{-1/2}\int dk\,e^{ikx}\,\langle k|f\rangle, \tag{0.86a}$$
$$\langle k|f\rangle = \langle k|I|f\rangle = \int dx\,\langle k|x\rangle\langle x|f\rangle = (2\pi)^{-1/2}\int dx\,e^{-ikx}\,\langle x|f\rangle. \tag{0.86b}$$
Writing
$$f(x) = \langle x|f\rangle, \qquad \tilde f(k) = \langle k|f\rangle, \tag{0.87}$$
one sees that (0.86) is just the familiar Fourier integral theorem.
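With the symmetric convention of (0.86), a unit-width Gaussian is (up to this convention) its own transform, which makes a convenient numerical check of the theorem:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / 2)        # f(x) = e^{-x^2/2}

# f~(k) = (2*pi)^{-1/2} * integral e^{-ikx} f(x) dx, per (0.86b);
# for this f the result should again be e^{-k^2/2}.
for k in [0.0, 0.5, 1.0, 2.0]:
    ft = np.sum(np.exp(-1j * k * x) * f) * dx / np.sqrt(2 * np.pi)
    assert np.isclose(ft, np.exp(-k ** 2 / 2), atol=1e-8)
```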
√ Exercise 18: Show that
$$\delta(x - x') = \frac{1}{2\pi}\int dk\,e^{ik(x - x')}. \tag{0.88}$$
Define the operators
$$X \equiv \int dx\,x\,|x\rangle\langle x|, \qquad K \equiv \int dk\,k\,|k\rangle\langle k|, \tag{0.89}$$
so that
$$X|x'\rangle = x'|x'\rangle, \qquad K|k'\rangle = k'|k'\rangle. \tag{0.90}$$
Now let $f(x) = \langle x|f\rangle$ for some state $|f\rangle$ be differentiable. We have
$$\frac{d}{dx}f(x) = \frac{d}{dx}\langle x|f\rangle = \int dk\,\frac{d}{dx}\langle x|k\rangle\,\langle k|f\rangle = (2\pi)^{-1/2}\int dk\,(ik)e^{ikx}\,\langle k|f\rangle = \int dk\,(ik)\,\langle x|k\rangle\langle k|f\rangle = \langle x|iK|f\rangle. \tag{0.91}$$
Thus:
$$\langle x|K|f\rangle = -i\,\frac{d}{dx}\langle x|f\rangle. \tag{0.92a}$$
√ Exercise 19: Show that
$$\langle k|X|f\rangle = i\,\frac{d}{dk}\langle k|f\rangle, \tag{0.92b}$$
$$\langle x|XK|f\rangle = -ix\,\frac{d}{dx}f(x), \tag{0.92c}$$
$$\langle x|KX|f\rangle = -i\,\frac{d}{dx}\bigl(xf(x)\bigr). \tag{0.92d}$$
It follows that
$$\langle x|[X, K]|f\rangle = i\,\langle x|f\rangle. \tag{0.92e}$$
Equation (0.92e) is the very important result:
$$[X, K] = iI. \tag{0.93}$$
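The representation (0.92a) of $K$ as $-i\,d/dx$ can be checked on a periodic grid, where multiplication by $k$ in the $k$-representation is implemented with the FFT. This discretization is our own illustrative stand-in for the continuum operators.

```python
import numpy as np

N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # grid of k-values

psi = np.exp(-x ** 2 / 2)                    # a Gaussian, well inside the box

# K psi: multiply by k in the k-representation, transform back.
K_psi = np.fft.ifft(k * np.fft.fft(psi))

# (0.92a): <x|K|psi> = -i (d/dx) psi(x), and here psi' = -x psi.
assert np.allclose(K_psi, -1j * (-x * psi), atol=1e-10)
```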
√ Exercise 20: For any function $V(x)$, any constant $\alpha$, and any state $|\psi\rangle$ with $\psi(x) = \langle x|\psi\rangle$, show that:
$$V(X) = \int dx\,V(x)\,|x\rangle\langle x|. \tag{0.94a}$$
Hence up to terms of order $\epsilon^2$ we can write
$$U = e^{\epsilon A}. \tag{0.99}$$
Applying $N$ such transformations in succession gives
$$U^N = (e^{\epsilon A})^N = e^{N\epsilon A}. \tag{0.100}$$
Thus we see that a symmetry that can be built up by a succession of infinitesimal transformations will be described by a unitary transformation of the form $e^{\tau A}$ with $A$ anti-hermitian.
Unfortunately most physicists prefer to work with hermitian operators rather than anti-hermitian ones, so it is customary to take advantage of the fact that any anti-hermitian operator $A$ can be written as
$$A = iB, \qquad B^\dagger = B.$$
The price one pays for having a hermitian $B$ in the exponent instead of an anti-hermitian $A$ is that one has to put up with $i$'s that can be a big nuisance.
Another advantage of using anti-hermitian operators is seen from the following exercise:
√
Exercise 22: Show that the commutator of two anti-hermitian operators is anti-
hermitian. Show that the commutator of two hermitian operators is anti-hermitian. Show
that the commutator of a hermitian and an anti-hermitian operator is hermitian.
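The three statements of Exercise 22 are easy to confirm numerically for random matrices (this of course illustrates, not proves, the exercise):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_herm(n=3):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

comm = lambda P, Q: P @ Q - Q @ P
dag = lambda P: P.conj().T

B1, B2 = rand_herm(), rand_herm()     # hermitian
A1, A2 = 1j * B1, 1j * B2             # anti-hermitian

# commutator of two anti-hermitian operators is anti-hermitian
assert np.allclose(dag(comm(A1, A2)), -comm(A1, A2))
# commutator of two hermitian operators is anti-hermitian
assert np.allclose(dag(comm(B1, B2)), -comm(B1, B2))
# commutator of a hermitian and an anti-hermitian operator is hermitian
assert np.allclose(dag(comm(B1, A2)), comm(B1, A2))
```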
Consider the Hilbert space of square integrable functions on the real line $\mathbb{R}$, i.e. our states $|\psi\rangle$ have the $x$-representation:
$$\langle x|\psi\rangle = \psi(x), \qquad \int_{-\infty}^{\infty} dx\,|\psi(x)|^2 < \infty. \tag{0.104a}$$
Now define operators
$$T_{x_0}|x\rangle = |x + x_0\rangle, \tag{0.105a}$$
$$\tilde T_{k_0}|k\rangle = |k + k_0\rangle, \tag{0.105b}$$
whence
$$\langle x|T_{x_0}|\psi\rangle = \int dy\,\langle x|T_{x_0}|y\rangle\langle y|\psi\rangle = \int dy\,\langle x|x_0 + y\rangle\langle y|\psi\rangle = \int dy\,\delta(x - x_0 - y)\,\psi(y) = \psi(x - x_0). \tag{0.106}$$
Thus e.g. if $\psi(x)$ is a Gaussian with its peak at $x = a$, then $\psi(x - x_0)$ is a Gaussian of the same width but with its peak at $x = a + x_0$. Thus $T_{x_0}$ translates the $x$-representation to the right by $x_0$. Similarly $\tilde T_{k_0}$ translates the $k$-representation to the right by $k_0$.
Thus
$$\tilde T_{k_0}|x\rangle = e^{ik_0 x}|x\rangle. \tag{0.107}$$
It follows that:
$$\tilde T_{k_0} T_{x_0}|x\rangle = e^{ik_0(x + x_0)}|x + x_0\rangle$$
and
$$T_{x_0}\tilde T_{k_0}|x\rangle = e^{ik_0 x}|x + x_0\rangle.$$
Thus
$$\tilde T_k T_x = e^{ikx}\,T_x \tilde T_k. \tag{0.108}$$
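A standard finite-dimensional model of the exchange relation (0.108) is the pair of $N \times N$ "clock" and "shift" matrices; they are not the operators $T_x, \tilde T_k$ themselves, just a discrete realization of the same kind of Weyl relation, with $e^{ikx}$ replaced by a root of unity.

```python
import numpy as np

N = 5
omega = np.exp(2j * np.pi / N)

T = np.roll(np.eye(N), 1, axis=0)       # shift: e_j -> e_{j+1 (mod N)}
V = np.diag(omega ** np.arange(N))      # clock: diagonal phases

# The discrete Weyl relation, the analogue of (0.108):
assert np.allclose(V @ T, omega * (T @ V))
```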
√ Exercise 23: Show that $T_x$ and $\tilde T_k$ are unitary operators.
√ Exercise 24: Show that the set of operators $T_x$ have the following properties:
$$T_{x_1}T_{x_2} = T_{x_1 + x_2}, \qquad T_0 = I, \qquad T_x^{-1} = T_{-x}. \tag{0.109}$$
A set of quantities with these properties is called a group. In the case of the $T$ operations we also have
$$T_{x_1}T_{x_2} = T_{x_2}T_{x_1}, \quad \forall\, x_1, x_2. \tag{0.110}$$
A group with this property is said to be commutative or abelian.
The group of operators $T_x$, $x \in \mathbb{R}$, is called the translation group on the real line $\mathbb{R}$. The unimodular quantities $\chi_k(x) = e^{ikx}$, which satisfy
$$\chi_k(x_1 + x_2) = \chi_k(x_1)\,\chi_k(x_2),$$
are called the characters of the group. One sees that multiplication by characters forms a group which is essentially the translation group on $k$-space produced by the operators $\tilde T_k$. (This pairing of an abelian group and its character group is called "Pontryagin duality".)
The set of translations on k-space together with the translations on x-space form a larger
group which contains each of them as a subgroup. Note carefully that because of (0.108)
this larger group is no longer commutative. This group is called the Weyl-Heisenberg group
and its non-commutativity will turn out to express the essential difference between classical
and quantum mechanics.
√ Exercise 25: Prove with attention to detail that this is indeed a group.
Now let us observe that there is a simple relationship between the operators of the Weyl-
Heisenberg group and the operators X and K discussed earlier.
We have
$$\langle k|e^{-ix_0 K}|x\rangle = e^{-ikx_0}\langle k|x\rangle = \langle k|x + x_0\rangle.$$
Thus
$$T_x = e^{-ixK}. \tag{0.112a}$$
√ Exercise 26: Show that
$$\tilde T_k = e^{ikX}. \tag{0.112b}$$
There is another interesting and revealing way of getting (0.112a): Let $|\psi\rangle$ be any vector and put $\psi(x) = \langle x|\psi\rangle$. Then
$$\langle x|e^{-ix_0 K}|\psi\rangle = \sum_{n=0}^{\infty} \frac{(-ix_0)^n}{n!}\langle x|K^n|\psi\rangle = \sum_{n=0}^{\infty} \frac{(-ix_0)^n}{n!}(-i)^n\frac{d^n}{dx^n}\langle x|\psi\rangle = \sum_{n=0}^{\infty} \frac{(-x_0)^n}{n!}\frac{d^n}{dx^n}\psi(x) = \psi(x - x_0). \tag{0.114}$$
0.15 Infinitesimal Transformations
Much of physics deals with continuous transformations of one sort or another. Such trans-
formations can be built up from a large number of small transformations. The simplest
example is the one-parameter sequence of transformations expressed by:
$$U(\tau) = e^{\tau A}, \tag{0.115}$$
which converges for any finite rank matrix. For operators that are represented by infinite matrices $A$ it is easy to show that this is still the case provided $\|A\| < \infty$, where the norm $\|A\|$ is defined as the largest value of the length $\|Av\|$ of the vector $Av$ as $v$ runs over all unit vectors. We shall assume that this is the case in the following. Writing
$$e^{\tau A} = \left(e^{(\tau/n)A}\right)^n = \left(I + \frac{\tau}{n}A + O(n^{-2})\right)^n, \tag{0.117}$$
where $I$ is the identity matrix, we note that each of the factors can be made as close as we wish to the identity. We therefore refer to $A$ as an infinitesimal generator for the one-parameter group $U(\tau)$.
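The limit in (0.117) can be watched numerically. Below, $A$ generates rotations of the plane, so $e^A$ is known in closed form, and the product of many near-identity factors reproduces it.

```python
import numpy as np

theta = 0.7
A = np.array([[0.0, -theta],
              [theta, 0.0]])            # anti-symmetric generator of rotations

exact = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta), np.cos(theta)]])   # e^A in closed form

# Many near-identity factors: (I + A/n)^n -> e^A as n -> infinity.
n = 1_000_000
approx = np.linalg.matrix_power(np.eye(2) + A / n, n)

assert np.allclose(approx, exact, atol=1e-4)
```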
In the case of non-abelian groups the situation is more complicated. Consider for example the set of $n \times n$ matrices of the form
$$U(\tau_1, \tau_2) = e^{\tau_1 A_1 + \tau_2 A_2},$$
in which $A_1, A_2$ are fixed $n \times n$ matrices and $\tau_1, \tau_2$ are real variables. If the $\tau$'s are both near zero, $U$ will be near the identity. But one can go from the identity in various ways. Of course if it happens that $A_1$ and $A_2$ commute, then the exponential factorizes and we have $U(\tau_1, \tau_2) = U(\tau_1, 0)U(0, \tau_2)$. But in general this will not be the case and the expansion near the identity is more complicated. To deal with this we must develop some mathematical techniques:
Definition: The Lie algebra L generated by a set {A} = {A1 , A2 , · · · , Ak } of n × n matrices
is the smallest set of matrices which contains {A} and is closed with respect to forming
linear combinations and commutators of its elements.
In particular, the Lie algebra generated by $A_1, \cdots, A_k$ contains the iterated commutators of any pair of its elements.
It may or may not turn out that all of the elements of $L$ can be written as linear combinations of some particular finite set of elements. If all elements are linear combinations of some $n$ elements, but not of any fewer, we say that $L$ is of dimension $n$. A set $B_1, \cdots, B_n$ of such elements is said to be a basis of $L$. If the dimension $n$ is finite we say that the algebra closes.
If $A_1, \cdots, A_n$ are such that constants $C_{ij}^k$, $i, j, k = 1, \cdots, n$ exist for which
$$[A_i, A_j] = \sum_{k=1}^{n} C_{ij}^k A_k, \tag{0.121}$$
then clearly $L$ has finite dimension. We call the $C$'s the structure constants of $L$.
√ Exercise 28: In each of the following some commutation relations are given. Show that each of the Lie algebras generated is of finite dimension and determine the structure constants:
(a) $J_1, J_2, J_3$ satisfy
(b) Let $X, K$ be the operators defined earlier that satisfy $[X, K] = iI$.
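As a spot check of the sort asked for in Exercise 28(a) — assuming, as is standard, that the relations meant there are $[J_i, J_j] = i\sum_k \epsilon_{ijk}J_k$ — the spin-1/2 matrices $J_i = \sigma_i/2$ realize the algebra, with structure constants $i\epsilon_{ijk}$:

```python
import numpy as np

# Pauli matrices; J_i = sigma_i / 2 (assumed convention, see lead-in).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [sx / 2, sy / 2, sz / 2]

# Totally antisymmetric symbol eps_{ijk}.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# [J_i, J_j] = i * eps_{ijk} J_k, i.e. structure constants i*eps_{ijk}.
for i in range(3):
    for j in range(3):
        lhs = J[i] @ J[j] - J[j] @ J[i]
        rhs = sum(1j * eps[i, j, k] * J[k] for k in range(3))
        assert np.allclose(lhs, rhs)
```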
√ Exercise 29: Same as (d), but let the algebra also contain some combination of $X$'s and $K$'s of degree greater than 2, e.g. $X^3$. Show that the algebra does NOT close.
(NOTE: We will see that it is a consequence of Exercise 29 and its generalizations that the elementary soluble problems of dynamics all involve quadratic Hamiltonians, i.e. no powers of $X, K$ greater than the second.)
The importance of the Lie algebra concept for the analysis of infinitesimal transformations results from the following result, known as the Campbell-Baker-Hausdorff theorem∗ (CBH): If $A, B$ are $n \times n$ matrices with sufficiently small norms, then
$$e^A e^B = e^C, \qquad C = A + B + \tfrac{1}{2}[A, B] + \tfrac{1}{12}[A, [A, B]] + \tfrac{1}{12}[B, [B, A]] + \cdots. \tag{0.122}$$
The power of this result lies in its assertion that the computation of $C$ will only involve commutators and iterated commutators of $A, B$.
√ Exercise 30: Let $\mathbf{v}, \mathbf{v}'$ be fixed 3-component unit vectors, let $\tau, \tau'$ be real numbers, and let $\mathbf{J} = (J_1, J_2, J_3)$, in which $J_1, J_2, J_3$ are $n \times n$ matrices obeying the commutation relations of exercise (28a). Using only the statement of the CBH theorem, prove that if $\mathbf{v}\cdot\mathbf{J} \equiv v_1 J_1 + v_2 J_2 + v_3 J_3$, then, for sufficiently small $\tau, \tau'$, there must exist a unit vector $\mathbf{v}''$ and a real number $\tau''$ such that
$$e^{\tau\,\mathbf{v}\cdot\mathbf{J}}\, e^{\tau'\,\mathbf{v}'\cdot\mathbf{J}} = e^{\tau''\,\mathbf{v}''\cdot\mathbf{J}}. \tag{0.123}$$
What is revealed in (0.123) is the very important process whereby a group is generated by exponentiating a Lie algebra. Small values of $\tau$ correspond to group elements in the neighborhood of the identity. The exponents thus describe the group in that neighborhood, where the exponential can be approximated by keeping only terms of order $\tau$. Note the very important fact that the group is thus entirely determined by the commutation relations of the generators, i.e. by the Lie algebra of the generators.
A good example of this process can be seen in the case of the Weyl-Heisenberg group described earlier. There we have
$$T_x = e^{-ixK}, \qquad \tilde T_k = e^{ikX}, \qquad [X, K] = iI.$$
Since $I$ commutes with everything, it follows from the CBH theorem that we must have:
$$e^{ikX}e^{-ixK} = (\text{constant})\,e^{ikX - ixK}.$$
∗ A discussion of this can be found e.g. in W. Miller, Jr., "Symmetry Groups and Their Applications", Academic Press, N.Y., 1972, p. 161.
Hence
$$e^{ikX}e^{-ixK} = (\text{constant})\,e^{-ixK}e^{ikX} \tag{0.126}$$
(the constants above may be different). But this is precisely what we found in (0.108). In fact let us see how the factor on the right is determined algebraically. To do so we need an identity that will come in handy in many instances:
$$e^F G e^{-F} = \sum_{n=0}^{\infty} \frac{1}{n!}\,[F^{(n)}, G], \tag{0.127}$$
in which $[F^{(n)}, G]$ denotes the $n$-fold iterated commutator: $[F^{(0)}, G] = G$ and $[F^{(n)}, G] = [F, [F^{(n-1)}, G]]$. To prove (0.127), define
$$W(\lambda) \equiv e^{\lambda F} G e^{-\lambda F}. \tag{0.128}$$
Then
$$\frac{d}{d\lambda}W(\lambda) = FW - WF = [F, W(\lambda)]. \tag{0.129}$$
Hence
$$\left(\frac{d^n W(\lambda)}{d\lambda^n}\right)_{\lambda = 0} = [F^{(n)}, W(\lambda)]_{\lambda = 0} = [F^{(n)}, G], \tag{0.130}$$
and (0.127) follows from the Taylor expansion
$$W(\lambda) = \sum_{n=0}^{\infty} \frac{\lambda^n}{n!}\left(\frac{d^n W(\lambda)}{d\lambda^n}\right)_{\lambda = 0} \tag{0.131}$$
evaluated at $\lambda = 1$.
As corollaries we have:
Theorem 15: Let $M = [F, G]$ and suppose that $F$ and $G$ both commute with $M$. Then
$$e^F G e^{-F} = G + M, \tag{0.132a}$$
$$e^F e^G e^{-F} = e^M e^G, \tag{0.132b}$$
$$e^F e^G = e^{M/2}\,e^{F + G}. \tag{0.132c}$$
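Theorem 15 can be confirmed exactly on a Heisenberg-type triple of nilpotent matrices, for which $M = [F, G]$ commutes with both $F$ and $G$ and the exponential series terminates:

```python
import numpy as np
from math import factorial

def expm(M, terms=20):
    """Series matrix exponential (exact here: the matrices are nilpotent)."""
    out = np.zeros_like(M)
    P = np.eye(M.shape[0], dtype=M.dtype)
    for n in range(terms):
        out = out + P / factorial(n)
        P = P @ M
    return out

# F, G generate a Heisenberg-type algebra: M = [F, G] commutes with both.
F = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
G = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
M = F @ G - G @ F
assert np.allclose(F @ M, M @ F) and np.allclose(G @ M, M @ G)

# (0.132a): e^F G e^-F = G + M
assert np.allclose(expm(F) @ G @ expm(-F), G + M)
# (0.132c): e^F e^G = e^(M/2) e^(F+G)
assert np.allclose(expm(F) @ expm(G), expm(M / 2) @ expm(F + G))
```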
√ Exercise 31: Deduce (0.108) from (0.132b).
Additional Note: The abstract definition of a Lie algebra $L$ is this: It is a set that is closed under forming linear combinations of its elements (i.e. it is a vector space) over the complex numbers and has a "Lie bracket", i.e. a binary operation indicated by $\{\,,\,\}$, such that for any elements $a, b, c$ and complex numbers $\alpha, \beta$ we have:
$$\{\alpha a + \beta b, c\} = \alpha\{a, c\} + \beta\{b, c\}, \qquad \{a, b\} = -\{b, a\},$$
$$\{a, \{b, c\}\} + \{b, \{c, a\}\} + \{c, \{a, b\}\} = 0.$$
√ Exercise 32: Show that the commutator is a Lie bracket. Show that the Poisson bracket of classical mechanics is also a Lie bracket.
√ Exercise 33: Show that one can turn the elements $x$ of a Lie algebra into operators denoted $\mathrm{Ad}(x)$ that act on the other elements $y$ of the algebra by the rule
$$\mathrm{Ad}(x)\,y \equiv \{x, y\}, \qquad [\mathrm{Ad}(x), \mathrm{Ad}(y)] = \mathrm{Ad}(\{x, y\}),$$
i.e. the commutator of the Ad operators is the Ad of the Lie bracket of the two elements. This means that any Lie bracket can be turned into a commutator bracket, so that in effect by studying commutators we are studying the most general kind of Lie bracket.