
Copyright, Pearson Education.

Chapter One
1. $AC\{D,B\} = ACDB + ACBD$, $A\{C,B\}D = ACBD + ABCD$, $C\{D,A\}B = CDAB + CADB$, and $\{C,A\}DB = CADB + ACDB$. Therefore

$$-AC\{D,B\} + A\{C,B\}D - C\{D,A\}B + \{C,A\}DB = -ACDB + ABCD - CDAB + ACDB = ABCD - CDAB = [AB, CD]$$
In preparing this solution manual, I have realized that problems 2 and 3 are misplaced in this chapter. They belong in Chapter Three. The Pauli matrices are not even defined in Chapter One, nor is the math used in the previous solution manual. Jim Napolitano
2. (a) $\mathrm{Tr}(X) = a_0\,\mathrm{Tr}(1) + \sum_\ell \mathrm{Tr}(\sigma_\ell)\,a_\ell = 2a_0$ since $\mathrm{Tr}(\sigma_\ell) = 0$. Also

$$\mathrm{Tr}(\sigma_k X) = a_0\,\mathrm{Tr}(\sigma_k) + \sum_\ell \mathrm{Tr}(\sigma_k\sigma_\ell)\,a_\ell = \frac{1}{2}\sum_\ell \mathrm{Tr}(\sigma_k\sigma_\ell + \sigma_\ell\sigma_k)\,a_\ell = \sum_\ell \delta_{k\ell}\,\mathrm{Tr}(1)\,a_\ell = 2a_k.$$

So, $a_0 = \frac{1}{2}\mathrm{Tr}(X)$ and $a_k = \frac{1}{2}\mathrm{Tr}(\sigma_k X)$. (b) Just do the algebra to find $a_0 = (X_{11} + X_{22})/2$, $a_1 = (X_{12} + X_{21})/2$, $a_2 = i(X_{12} - X_{21})/2$, and $a_3 = (X_{11} - X_{22})/2$.
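As a quick check of part (b), the decomposition $X = a_0\mathbf 1 + \sum_k a_k\sigma_k$ can be verified numerically. This is a sketch using numpy; the matrix `X` is an arbitrary example chosen here for illustration:

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

X = np.array([[1.0, 2.0 - 1.0j],
              [0.5j, -3.0]])          # arbitrary 2x2 matrix

a0 = 0.5 * np.trace(X)                 # a_0 = Tr(X)/2
a = [0.5 * np.trace(sk @ X) for sk in sigma]   # a_k = Tr(sigma_k X)/2

# reconstruct X from its components
X_rec = a0 * np.eye(2) + sum(ak * sk for ak, sk in zip(a, sigma))
assert np.allclose(X_rec, X)

# closed forms from part (b)
assert np.isclose(a[0], (X[0, 1] + X[1, 0]) / 2)
assert np.isclose(a[1], 1j * (X[0, 1] - X[1, 0]) / 2)
assert np.isclose(a[2], (X[0, 0] - X[1, 1]) / 2)
```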
3. Since $\det(\sigma\cdot\mathbf a) = -a_z^2 - (a_x^2 + a_y^2) = -|\mathbf a|^2$, the cognoscenti realize that this problem really has to do with rotation operators. From this result, and (3.2.44), we write

$$\det\left[\exp\left(\pm\frac{i\,\sigma\cdot\hat{\mathbf n}\,\phi}{2}\right)\right] = \left(\cos\frac{\phi}{2} \pm i\sin\frac{\phi}{2}\right)\left(\cos\frac{\phi}{2} \mp i\sin\frac{\phi}{2}\right) = 1$$

and multiplying out determinants makes it clear that $\det(\sigma\cdot\mathbf a') = \det(\sigma\cdot\mathbf a)$. Similarly, use (3.2.44) to explicitly write out the matrix $\sigma\cdot\mathbf a'$ and equate the elements to those of $\sigma\cdot\mathbf a$. With $\hat{\mathbf n}$ in the $z$-direction, it is clear that we have just performed a rotation (of the spin vector) through the angle $\phi$.
4. (a) $\mathrm{Tr}(XY) \equiv \sum_a \langle a|XY|a\rangle = \sum_a\sum_b \langle a|X|b\rangle\langle b|Y|a\rangle$ by inserting the identity operator. Then commute and reverse, so $\mathrm{Tr}(XY) = \sum_b\sum_a \langle b|Y|a\rangle\langle a|X|b\rangle = \sum_b \langle b|YX|b\rangle = \mathrm{Tr}(YX)$.

(b) $XY|\alpha\rangle = X[Y|\alpha\rangle]$ is dual to $\langle\alpha|(XY)^\dagger$, but $Y|\alpha\rangle \equiv |\beta\rangle$ is dual to $\langle\alpha|Y^\dagger \equiv \langle\beta|$ and $X|\beta\rangle$ is dual to $\langle\beta|X^\dagger$, so that $X[Y|\alpha\rangle]$ is dual to $\langle\alpha|Y^\dagger X^\dagger$. Therefore $(XY)^\dagger = Y^\dagger X^\dagger$.

(c) $\exp[if(A)] = \sum_a \exp[if(A)]\,|a\rangle\langle a| = \sum_a \exp[if(a)]\,|a\rangle\langle a|$

(d) $\sum_a \psi_a^*(x')\,\psi_a(x'') = \sum_a \langle x'|a\rangle^*\langle x''|a\rangle = \sum_a \langle x''|a\rangle\langle a|x'\rangle = \langle x''|x'\rangle = \delta(x'' - x')$
5. For basis kets $|a_i\rangle$, matrix elements of $X \equiv |\alpha\rangle\langle\beta|$ are $X_{ij} = \langle a_i|\alpha\rangle\langle\beta|a_j\rangle$. For spin-1/2 in the $|\pm z\rangle$ basis, $\langle +|S_z = \hbar/2\rangle = 1$, $\langle -|S_z = \hbar/2\rangle = 0$, and, using (1.4.17a), $\langle\pm|S_x = \hbar/2\rangle = 1/\sqrt2$. Therefore

$$|S_z = \hbar/2\rangle\langle S_x = \hbar/2| \doteq \frac{1}{\sqrt2}\begin{pmatrix} 1 & 1\\ 0 & 0\end{pmatrix}$$

6. $A[|i\rangle + |j\rangle] = a_i|i\rangle + a_j|j\rangle \ne \lambda[|i\rangle + |j\rangle]$, so in general it is not an eigenvector, unless $a_i = a_j$. That is, $|i\rangle + |j\rangle$ is not an eigenvector of $A$ unless the eigenvalues are degenerate.
7. Since the product is over a complete set, the operator $\prod_{a'}(A - a')$ will always encounter a state $|a_i\rangle$ such that $a' = a_i$, in which case the result is zero. Hence for any state $|\alpha\rangle$

$$\prod_{a'}(A - a')|\alpha\rangle = \prod_{a'}(A - a')\sum_i |a_i\rangle\langle a_i|\alpha\rangle = \sum_i \prod_{a'}(a_i - a')\,|a_i\rangle\langle a_i|\alpha\rangle = \sum_i 0 = 0$$

If the product instead is over all $a' \ne a''$, then the only surviving term in the sum is

$$\prod_{a' \ne a''}(a'' - a')\,|a''\rangle\langle a''|\alpha\rangle$$

and dividing by the factors $(a'' - a')$ just gives the projection of $|\alpha\rangle$ on the direction $|a''\rangle$. For the operator $A \equiv S_z$ and $\{|a'\rangle\} \equiv \{|+\rangle, |-\rangle\}$, we have

$$\prod_{a'}(A - a') = \left(S_z - \frac\hbar2\right)\left(S_z + \frac\hbar2\right)$$

and

$$\prod_{a' \ne a''}\frac{A - a'}{a'' - a'} = \frac{S_z + \hbar/2}{\hbar}\ \text{ for }a'' = +\frac\hbar2\qquad\text{or}\qquad = -\frac{S_z - \hbar/2}{\hbar}\ \text{ for }a'' = -\frac\hbar2$$

It is trivial to see that the first operator is the null operator. For the second and third, you can work these out explicitly using (1.3.35) and (1.3.36), for example

$$\frac{S_z + \hbar/2}{\hbar} = \frac1\hbar\left[S_z + \frac\hbar2\mathbf 1\right] = \frac12\left[(|+\rangle\langle+|) - (|-\rangle\langle-|) + (|+\rangle\langle+|) + (|-\rangle\langle-|)\right] = |+\rangle\langle+|$$

which is just the projection operator for the state $|+\rangle$.
8. I don't see any way to do this problem other than by brute force, and neither did the previous solutions manual. So, make use of $\langle+|+\rangle = 1 = \langle-|-\rangle$ and $\langle+|-\rangle = 0 = \langle-|+\rangle$ and carry through six independent calculations of $[S_i, S_j]$ (along with $[S_i, S_j] = -[S_j, S_i]$) and the six for $\{S_i, S_j\}$ (along with $\{S_i, S_j\} = +\{S_j, S_i\}$).
9. From the figure, $\hat{\mathbf n} = \hat{\mathbf i}\cos\alpha\sin\beta + \hat{\mathbf j}\sin\alpha\sin\beta + \hat{\mathbf k}\cos\beta$, so we need to find the matrix representation of the operator $\mathbf S\cdot\hat{\mathbf n} = S_x\cos\alpha\sin\beta + S_y\sin\alpha\sin\beta + S_z\cos\beta$. This means we need the matrix representations of $S_x$, $S_y$, and $S_z$. Get these from the prescription (1.3.19) and the operators represented as outer products in (1.4.18) and (1.3.36), along with the association (1.3.39a) to define which element is which. Thus

$$S_x \doteq \frac\hbar2\begin{pmatrix}0&1\\1&0\end{pmatrix}\qquad S_y \doteq \frac\hbar2\begin{pmatrix}0&-i\\i&0\end{pmatrix}\qquad S_z \doteq \frac\hbar2\begin{pmatrix}1&0\\0&-1\end{pmatrix}$$

We therefore need to find the (normalized) eigenvector for the matrix

$$\begin{pmatrix}\cos\beta & \cos\alpha\sin\beta - i\sin\alpha\sin\beta\\ \cos\alpha\sin\beta + i\sin\alpha\sin\beta & -\cos\beta\end{pmatrix} = \begin{pmatrix}\cos\beta & e^{-i\alpha}\sin\beta\\ e^{i\alpha}\sin\beta & -\cos\beta\end{pmatrix}$$
with eigenvalue +1. If the upper and lower elements of the eigenvector are $a$ and $b$, respectively, then we have the equations $|a|^2 + |b|^2 = 1$ and

$$a\cos\beta + be^{-i\alpha}\sin\beta = a\qquad\qquad ae^{i\alpha}\sin\beta - b\cos\beta = b$$

Choose the phase so that $a$ is real and positive. Work with the first equation. (The two equations should be equivalent, since we picked a valid eigenvalue. You should check.) Then

$$a^2(1 - \cos\beta)^2 = |b|^2\sin^2\beta = (1 - a^2)\sin^2\beta$$
$$4a^2\sin^4(\beta/2) = (1 - a^2)\,4\sin^2(\beta/2)\cos^2(\beta/2)$$
$$a^2[\sin^2(\beta/2) + \cos^2(\beta/2)] = \cos^2(\beta/2)$$
$$a = \cos(\beta/2)$$

and so

$$b = ae^{i\alpha}\,\frac{1 - \cos\beta}{\sin\beta} = \cos(\beta/2)\,e^{i\alpha}\,\frac{2\sin^2(\beta/2)}{2\sin(\beta/2)\cos(\beta/2)} = e^{i\alpha}\sin(\beta/2)$$

which agrees with the answer given in the problem.
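The eigenvector found above is easy to check numerically. This is a numpy sketch; the angle values are arbitrary test inputs:

```python
import numpy as np

alpha, beta = 0.7, 1.1   # arbitrary angles (radians)

# sigma . n-hat for n = (cos(a)sin(b), sin(a)sin(b), cos(b)); eigenvalues +/-1
M = np.array([[np.cos(beta),                   np.exp(-1j*alpha)*np.sin(beta)],
              [np.exp(1j*alpha)*np.sin(beta), -np.cos(beta)]])

# claimed eigenvector for eigenvalue +1
v = np.array([np.cos(beta/2), np.exp(1j*alpha)*np.sin(beta/2)])

assert np.allclose(M @ v, v)               # eigenvalue is +1
assert np.isclose(np.linalg.norm(v), 1.0)  # normalized
```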
10. Use simple matrix techniques for this problem. The matrix representation for $H$ is

$$H \doteq \begin{pmatrix} a & a\\ a & -a\end{pmatrix}$$

Eigenvalues $E$ satisfy $(a - E)(-a - E) - a^2 = -2a^2 + E^2 = 0$, or $E = \pm a\sqrt2$. Let $x_1$ and $x_2$ be the two elements of the eigenvector. For $E = +a\sqrt2 \equiv E^{(1)}$, $(1 - \sqrt2)x_1^{(1)} + x_2^{(1)} = 0$, and for $E = -a\sqrt2 \equiv E^{(2)}$, $(1 + \sqrt2)x_1^{(2)} + x_2^{(2)} = 0$. So the eigenstates are represented by

$$|E^{(1)}\rangle \doteq N^{(1)}\begin{pmatrix}1\\ \sqrt2 - 1\end{pmatrix}\qquad\text{and}\qquad |E^{(2)}\rangle \doteq N^{(2)}\begin{pmatrix}1\\ -(\sqrt2 + 1)\end{pmatrix}$$

where $[N^{(1)}]^2 = 1/(4 - 2\sqrt2)$ and $[N^{(2)}]^2 = 1/(4 + 2\sqrt2)$.
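A quick numerical check of these eigenvalues and eigenvectors (a numpy sketch, with an arbitrary value for $a$):

```python
import numpy as np

a = 1.3
H = np.array([[a, a], [a, -a]])

evals = np.linalg.eigvalsh(H)
assert np.allclose(np.sort(evals), [-a*np.sqrt(2), a*np.sqrt(2)])

# normalized eigenvector for E = +a*sqrt(2): N^(1) * (1, sqrt(2)-1)
v1 = np.array([1.0, np.sqrt(2) - 1]) / np.sqrt(4 - 2*np.sqrt(2))
assert np.allclose(H @ v1, a*np.sqrt(2) * v1)
assert np.isclose(np.linalg.norm(v1), 1.0)
```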
11. It is of course possible to solve this using simple matrix techniques. For example, the characteristic equation and eigenvalues are

$$0 = (H_{11} - \lambda)(H_{22} - \lambda) - H_{12}^2$$
$$\lambda = \frac{H_{11} + H_{22}}{2} \pm \left[\left(\frac{H_{11} - H_{22}}{2}\right)^2 + H_{12}^2\right]^{1/2}$$

You can go ahead and solve for the eigenvectors, but it is tedious and messy. However, there is a strong hint given that you can make use of spin algebra to solve this problem, another two-state system. The Hamiltonian can be rewritten as

$$H \doteq A\mathbf 1 + B\sigma_z + C\sigma_x$$
where $A \equiv (H_{11} + H_{22})/2$, $B \equiv (H_{11} - H_{22})/2$, and $C \equiv H_{12}$. The eigenvalues of the first term are both $A$, and the eigenvalues for the sum of the second and third terms are those of $(2/\hbar)$ times a spin vector multiplied by $\sqrt{B^2 + C^2}$. In other words, the eigenvalues of the full Hamiltonian are just $A \pm \sqrt{B^2 + C^2}$, in full agreement with what we got with usual matrix techniques, above. From the hint (or Problem 9) the eigenvectors must be

$$|\psi_+\rangle = \cos\frac\beta2|1\rangle + \sin\frac\beta2|2\rangle\qquad\text{and}\qquad |\psi_-\rangle = -\sin\frac\beta2|1\rangle + \cos\frac\beta2|2\rangle$$

where $\alpha = 0$, $\tan\beta = C/B = 2H_{12}/(H_{11} - H_{22})$, and we do $\beta \to \beta + \pi$ to flip the spin.
12. Using the result of Problem 9, the probability of measuring $+\hbar/2$ is

$$\left|\left[\frac{1}{\sqrt2}\langle+| + \frac{1}{\sqrt2}\langle-|\right]\left[\cos\frac\gamma2|+\rangle + \sin\frac\gamma2|-\rangle\right]\right|^2 = \frac12\left[\sqrt{\frac{1 + \cos\gamma}{2}} + \sqrt{\frac{1 - \cos\gamma}{2}}\right]^2 = \frac{1 + \sin\gamma}{2}$$

The results for $\gamma = 0$ (i.e. $|+\rangle$), $\gamma = \pi/2$ (i.e. $|S_x;+\rangle$), and $\gamma = \pi$ (i.e. $|-\rangle$) are 1/2, 1, and 1/2, as expected. Now $\langle(S_x - \langle S_x\rangle)^2\rangle = \langle S_x^2\rangle - \langle S_x\rangle^2$, but $S_x^2 = \hbar^2/4$ from Problem 8, and

$$\langle S_x\rangle = \left[\cos\frac\gamma2\langle+| + \sin\frac\gamma2\langle-|\right]\frac\hbar2\Big[|+\rangle\langle-| + |-\rangle\langle+|\Big]\left[\cos\frac\gamma2|+\rangle + \sin\frac\gamma2|-\rangle\right]$$
$$= \frac\hbar2\left[\cos\frac\gamma2\langle-| + \sin\frac\gamma2\langle+|\right]\left[\cos\frac\gamma2|+\rangle + \sin\frac\gamma2|-\rangle\right] = \hbar\cos\frac\gamma2\sin\frac\gamma2 = \frac\hbar2\sin\gamma$$

so $\langle(S_x - \langle S_x\rangle)^2\rangle = \hbar^2(1 - \sin^2\gamma)/4 = \hbar^2\cos^2\gamma/4 = \hbar^2/4,\ 0,\ \hbar^2/4$ for $\gamma = 0, \pi/2, \pi$.
13. All atoms are in the state $|+\rangle$ after emerging from the first apparatus. The second apparatus projects out the state $|S_{\hat n};+\rangle$. That is, it acts as the projection operator

$$|S_{\hat n};+\rangle\langle S_{\hat n};+| = \left[\cos\frac\beta2|+\rangle + \sin\frac\beta2|-\rangle\right]\left[\cos\frac\beta2\langle+| + \sin\frac\beta2\langle-|\right]$$

and the third apparatus projects out $|-\rangle$. Therefore, the probability of measuring $-\hbar/2$ after the third apparatus is

$$P(\beta) = \left|\langle-|S_{\hat n};+\rangle\langle S_{\hat n};+|+\rangle\right|^2 = \cos^2\frac\beta2\,\sin^2\frac\beta2 = \frac14\sin^2\beta$$

The maximum transmission is for $\beta = 90°$, when 25% of the atoms make it through.
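The chain of filters can be simulated directly. This is a small numpy sketch; `survival` is a helper name introduced here just for illustration:

```python
import numpy as np

def survival(beta):
    """Probability of measuring -hbar/2 after |+> passes the S.n filter."""
    plus = np.array([1.0, 0.0])
    minus = np.array([0.0, 1.0])
    sn_plus = np.cos(beta/2)*plus + np.sin(beta/2)*minus   # |S_n;+>
    amp = (minus @ sn_plus) * (sn_plus @ plus)             # <-| (|Sn+><Sn+|) |+>
    return abs(amp)**2

assert np.isclose(survival(np.pi/2), 0.25)                 # 25% at beta = 90 deg
assert np.isclose(survival(0.7), 0.25*np.sin(0.7)**2)      # = (1/4) sin^2(beta)
```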


14. The characteristic equation is $-\lambda^3 - 2(-\lambda)(1/\sqrt2)^2 = \lambda(1 - \lambda^2) = 0$, so the eigenvalues are $\lambda = 0, \pm1$ and there is no degeneracy. The eigenvectors corresponding to these are

$$\frac{1}{\sqrt2}\begin{pmatrix}1\\0\\-1\end{pmatrix}\qquad \frac12\begin{pmatrix}1\\\sqrt2\\1\end{pmatrix}\qquad \frac12\begin{pmatrix}1\\-\sqrt2\\1\end{pmatrix}$$

The matrix algebra is not hard, but I did this with matlab using
M=[[0 1 0];[1 0 1];[0 1 0]]/sqrt(2)
[V,D]=eig(M)
These are the eigenvectors corresponding to a spin-one system, for a measurement in the x-direction in terms of a basis defined in the z-direction. I'm not sure if there is enough information in Chapter One, though, in order to deduce this.
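For readers without matlab, the same diagonalization can be sketched in Python with numpy (`eigh` returns eigenvalues in ascending order, and eigenvectors are determined only up to sign):

```python
import numpy as np

M = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]]) / np.sqrt(2)

evals, evecs = np.linalg.eigh(M)       # columns of evecs are eigenvectors
assert np.allclose(evals, [-1, 0, 1])

# the lambda = 0 eigenvector is (1, 0, -1)/sqrt(2), up to an overall sign
v0 = evecs[:, 1]
assert np.allclose(np.abs(v0), [1/np.sqrt(2), 0, 1/np.sqrt(2)])
```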
15. The answer is yes. The identity operator is $\mathbf 1 = \sum_{a',b'} |a',b'\rangle\langle a',b'|$, so

$$AB = AB\,\mathbf 1 = AB\sum_{a',b'}|a',b'\rangle\langle a',b'| = A\sum_{a',b'}b'\,|a',b'\rangle\langle a',b'| = \sum_{a',b'}a'b'\,|a',b'\rangle\langle a',b'| = BA$$

Completeness is powerful. It is important to note that the sum must be over both $a'$ and $b'$ in order to span the complete set of states.
16. Since $AB = -BA$ and $AB|a, b\rangle = ab|a, b\rangle = BA|a, b\rangle$, we must have $ab = -ab$, where both $a$ and $b$ are real numbers. This can only be satisfied if $a = 0$ or $b = 0$ or both.
17. Assume there is no degeneracy and look for an inconsistency with our assumptions. If $|n\rangle$ is a nondegenerate energy eigenstate with eigenvalue $E_n$, then it is the only state with this energy. Since $[H, A_1] = 0$, we must have $HA_1|n\rangle = A_1H|n\rangle = E_nA_1|n\rangle$. That is, $A_1|n\rangle$ is an eigenstate of energy with eigenvalue $E_n$. Since $H$ and $A_1$ commute, though, they may have simultaneous eigenstates. Therefore, $A_1|n\rangle = a_1|n\rangle$ since there is only one energy eigenstate. Similarly, $A_2|n\rangle$ is also an eigenstate of energy with eigenvalue $E_n$, and $A_2|n\rangle = a_2|n\rangle$. But $A_1A_2|n\rangle = a_2A_1|n\rangle = a_2a_1|n\rangle$ and $A_2A_1|n\rangle = a_1a_2|n\rangle$, where $a_1$ and $a_2$ are real numbers. This cannot be true, in general, if $A_1A_2 \ne A_2A_1$, so our assumption of no degeneracy must be wrong. There is an out, though, if $a_1 = 0$ or $a_2 = 0$, since one operator acts on zero.

The example given is from a central-force Hamiltonian. (See Chapter Three.) The Hamiltonian commutes with the orbital angular momentum operators $L_x$ and $L_y$, but $[L_x, L_y] \ne 0$. Therefore, in general, there is a degeneracy in these problems. The degeneracy is avoided, though, for $S$-states, where the quantum numbers of $L_x$ and $L_y$ are both necessarily zero.
18. The positivity postulate says that $\langle\gamma|\gamma\rangle \ge 0$, and we apply this to $|\gamma\rangle \equiv |\alpha\rangle + \lambda|\beta\rangle$. The text shows how to apply this to prove the Schwarz inequality $\langle\alpha|\alpha\rangle\langle\beta|\beta\rangle \ge |\langle\alpha|\beta\rangle|^2$, from which one derives the generalized uncertainty relation (1.4.53), namely

$$\langle(\Delta A)^2\rangle\langle(\Delta B)^2\rangle \ge \frac14\left|\langle[A, B]\rangle\right|^2$$

Note that $[\Delta A, \Delta B] = [A - \langle A\rangle, B - \langle B\rangle] = [A, B]$. Taking $\Delta A|\alpha\rangle = \lambda\Delta B|\alpha\rangle$ with $\lambda^* = -\lambda$, as suggested, so $\langle\alpha|\Delta A = -\lambda\langle\alpha|\Delta B$, for a particular state $|\alpha\rangle$. Then

$$\langle[A, B]\rangle = \langle\alpha|\Delta A\,\Delta B - \Delta B\,\Delta A|\alpha\rangle = -2\lambda\langle\alpha|(\Delta B)^2|\alpha\rangle$$
and the equality is clearly satisfied in (1.4.53). We are now asked to verify this relationship for a state $|\alpha\rangle$ that is a gaussian wave packet when expressed as a wave function $\langle x'|\alpha\rangle$. Use

$$\langle x'|\Delta x|\alpha\rangle = \langle x'|x|\alpha\rangle - \langle x\rangle\langle x'|\alpha\rangle = (x' - \langle x\rangle)\langle x'|\alpha\rangle$$

and

$$\langle x'|\Delta p|\alpha\rangle = \langle x'|p|\alpha\rangle - \langle p\rangle\langle x'|\alpha\rangle = \frac\hbar i\frac{d}{dx'}\langle x'|\alpha\rangle - \langle p\rangle\langle x'|\alpha\rangle$$

with

$$\langle x'|\alpha\rangle = (2\pi d^2)^{-1/4}\exp\left[\frac{i\langle p\rangle x'}{\hbar} - \frac{(x' - \langle x\rangle)^2}{4d^2}\right]$$

to get

$$\frac\hbar i\frac{d}{dx'}\langle x'|\alpha\rangle = \left[\langle p\rangle - \frac\hbar i\,\frac{1}{2d^2}(x' - \langle x\rangle)\right]\langle x'|\alpha\rangle$$

and so

$$\langle x'|\Delta p|\alpha\rangle = \frac{i\hbar}{2d^2}(x' - \langle x\rangle)\langle x'|\alpha\rangle = \lambda\langle x'|\Delta x|\alpha\rangle$$

where $\lambda$ is a purely imaginary number. The conjecture is satisfied.

It is very simple to show that this condition is satisfied for the ground state of the harmonic oscillator. Refer to (2.3.24) and (2.3.25). Clearly $\langle x\rangle = 0 = \langle p\rangle$ for any eigenstate $|n\rangle$, and $x|0\rangle$ is proportional to $p|0\rangle$, with a proportionality constant that is purely imaginary.
19. Note the obvious typographical error, i.e. $\langle S_x\rangle^2$ should be $\langle S_x^2\rangle$. Have $\langle S_x^2\rangle = \hbar^2/4 = \langle S_y^2\rangle = \langle S_z^2\rangle$, also $[S_x, S_y] = i\hbar S_z$, all from Problem 8. Now $\langle S_x\rangle = \langle S_y\rangle = 0$ for the $|+\rangle$ state. Then $\langle(\Delta S_x)^2\rangle = \hbar^2/4 = \langle(\Delta S_y)^2\rangle$, and $\langle(\Delta S_x)^2\rangle\langle(\Delta S_y)^2\rangle = \hbar^4/16$. Also $|\langle[S_x, S_y]\rangle|^2/4 = \hbar^2|\langle S_z\rangle|^2/4 = \hbar^4/16$, and the generalized uncertainty principle is satisfied by the equality. On the other hand, for the $|S_x;+\rangle$ state, $\langle(\Delta S_x)^2\rangle = 0$ and $\langle S_z\rangle = 0$, and again the generalized uncertainty principle is satisfied with an equality.
20. Refer to Problems 8 and 9. Parameterize the state as $|\psi\rangle = \cos\frac\beta2|+\rangle + e^{i\alpha}\sin\frac\beta2|-\rangle$, so

$$\langle S_x\rangle = \left[\cos\frac\beta2\langle+| + e^{-i\alpha}\sin\frac\beta2\langle-|\right]\frac\hbar2\Big[|+\rangle\langle-| + |-\rangle\langle+|\Big]\left[\cos\frac\beta2|+\rangle + e^{i\alpha}\sin\frac\beta2|-\rangle\right]$$
$$= \frac\hbar2\sin\frac\beta2\cos\frac\beta2\left(e^{i\alpha} + e^{-i\alpha}\right) = \frac\hbar2\sin\beta\cos\alpha$$

$$\langle(\Delta S_x)^2\rangle = \langle S_x^2\rangle - \langle S_x\rangle^2 = \frac{\hbar^2}{4}\left(1 - \sin^2\beta\cos^2\alpha\right)\qquad\text{(see Problem 12)}$$

$$\langle S_y\rangle = \left[\cos\frac\beta2\langle+| + e^{-i\alpha}\sin\frac\beta2\langle-|\right]\left(-i\frac\hbar2\right)\Big[|+\rangle\langle-| - |-\rangle\langle+|\Big]\left[\cos\frac\beta2|+\rangle + e^{i\alpha}\sin\frac\beta2|-\rangle\right]$$
$$= -i\frac\hbar2\sin\frac\beta2\cos\frac\beta2\left(e^{i\alpha} - e^{-i\alpha}\right) = \frac\hbar2\sin\beta\sin\alpha$$

$$\langle(\Delta S_y)^2\rangle = \langle S_y^2\rangle - \langle S_y\rangle^2 = \frac{\hbar^2}{4}\left(1 - \sin^2\beta\sin^2\alpha\right)$$
Therefore, the left side of the uncertainty relation is

$$\langle(\Delta S_x)^2\rangle\langle(\Delta S_y)^2\rangle = \frac{\hbar^4}{16}\left(1 - \sin^2\beta\cos^2\alpha\right)\left(1 - \sin^2\beta\sin^2\alpha\right)$$
$$= \frac{\hbar^4}{16}\left[1 - \sin^2\beta + \frac14\sin^4\beta\sin^2 2\alpha\right] = \frac{\hbar^4}{16}\left[\cos^2\beta + \frac14\sin^4\beta\sin^2 2\alpha\right] \equiv P(\alpha, \beta)$$

which is clearly maximized when $\sin 2\alpha = \pm1$ for any value of $\beta$. In other words, the uncertainty product is a maximum when the state is pointing in a direction that is 45° with respect to the $x$ or $y$ axes in any quadrant, for any tilt angle $\beta$ relative to the $z$-axis. This makes sense. The maximum tilt angle is derived from

$$\frac{\partial P}{\partial\beta} \propto -2\cos\beta\sin\beta + \sin^3\beta\cos\beta\,(1) = \cos\beta\sin\beta\left(-2 + \sin^2\beta\right) = 0$$

or $\sin\beta = 1/\sqrt2$, that is, 45° with respect to the $z$-axis. It all hangs together. The maximum uncertainty product is

$$\langle(\Delta S_x)^2\rangle\langle(\Delta S_y)^2\rangle = \frac{\hbar^4}{16}\left[\frac12 + \frac14\cdot\frac14\right] = \frac{9}{256}\hbar^4$$

The right side of the uncertainty relation is $|\langle[S_x, S_y]\rangle|^2/4 = \hbar^2|\langle S_z\rangle|^2/4$, so we also need

$$\langle S_z\rangle = \frac\hbar2\left[\cos^2\frac\beta2 - \sin^2\frac\beta2\right] = \frac\hbar2\cos\beta$$

so the value of the right hand side at maximum is

$$\frac{\hbar^2}{4}|\langle S_z\rangle|^2 = \frac{\hbar^2}{4}\,\frac{\hbar^2}{4}\,\frac12 = \frac{8}{256}\hbar^4$$

and the uncertainty principle is indeed satisfied.
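The two sides of the uncertainty relation can be spot-checked over the whole sphere of states. This is a numpy sketch with $\hbar = 1$; it verifies the quoted values $9/256$ and $8/256$ at $\sin^2\beta = 1/2$, $\sin 2\alpha = 1$, and that the inequality holds everywhere:

```python
import numpy as np

def lhs_rhs(alpha, beta, hbar=1.0):
    """Both sides of the S_x, S_y uncertainty relation for
       |psi> = cos(b/2)|+> + e^{i a} sin(b/2)|->."""
    lhs = (hbar**4/16) * (1 - np.sin(beta)**2 * np.cos(alpha)**2) \
                       * (1 - np.sin(beta)**2 * np.sin(alpha)**2)
    rhs = (hbar**4/16) * np.cos(beta)**2     # = |<[Sx,Sy]>|^2 / 4
    return lhs, rhs

# values quoted above: sin^2(beta) = 1/2 and sin(2*alpha) = 1
l, r = lhs_rhs(np.pi/4, np.pi/4)
assert np.isclose(l, 9/256) and np.isclose(r, 8/256)

# the relation holds for every direction on the sphere
for a in np.linspace(0, 2*np.pi, 41):
    for b in np.linspace(0, np.pi, 41):
        l, r = lhs_rhs(a, b)
        assert l >= r - 1e-12
```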
21. The wave function is $\langle x|n\rangle = \sqrt{2/a}\,\sin(n\pi x/a)$ for $n = 1, 2, 3, \ldots$, so we calculate

$$\langle x\rangle = \int_0^a \langle n|x\rangle\,x\,\langle x|n\rangle\,dx = \frac a2$$
$$\langle x^2\rangle = \int_0^a \langle n|x\rangle\,x^2\,\langle x|n\rangle\,dx = \frac{a^2}{6}\left(2 - \frac{3}{n^2\pi^2}\right)$$
$$\langle(\Delta x)^2\rangle = \frac{a^2}{6}\left(2 - \frac{3}{n^2\pi^2} - \frac64\right) = \frac{a^2}{6}\left(\frac12 - \frac{3}{n^2\pi^2}\right)$$
$$\langle p\rangle = \int_0^a \langle n|x\rangle\,\frac\hbar i\frac{d}{dx}\langle x|n\rangle\,dx = 0$$
$$\langle p^2\rangle = -\hbar^2\int_0^a \langle n|x\rangle\,\frac{d^2}{dx^2}\langle x|n\rangle\,dx = \frac{n^2\pi^2\hbar^2}{a^2} = \langle(\Delta p)^2\rangle$$
(I did these with maple.) Since $[x, p] = i\hbar$, we compare $\langle(\Delta x)^2\rangle\langle(\Delta p)^2\rangle$ to $\hbar^2/4$ with

$$\langle(\Delta x)^2\rangle\langle(\Delta p)^2\rangle = \frac{\hbar^2}{6}\left(\frac{n^2\pi^2}{2} - 3\right) = \frac{\hbar^2}{4}\left(\frac{n^2\pi^2}{3} - 2\right)$$

which shows that the uncertainty principle is satisfied, since $n^2\pi^2/3 - 2 \ge \pi^2/3 - 2 > 1$ for all $n$.
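The maple results quoted above are easy to confirm by direct numerical integration. This is a numpy sketch with $\hbar = a = 1$; since the integrands vanish at both endpoints, a plain Riemann sum is already accurate:

```python
import numpy as np

hbar, a = 1.0, 1.0
xs = np.linspace(0.0, a, 200001)
dx = xs[1] - xs[0]

for n in (1, 2, 3):
    psi = np.sqrt(2/a) * np.sin(n*np.pi*xs/a)
    ex  = np.sum(psi * xs * psi) * dx        # <x>
    ex2 = np.sum(psi * xs**2 * psi) * dx     # <x^2>
    dx2 = ex2 - ex**2                        # (Delta x)^2
    dp2 = n**2 * np.pi**2 * hbar**2 / a**2   # <p^2>, and <p> = 0
    assert np.isclose(ex, a/2, atol=1e-6)
    assert np.isclose(dx2*dp2, hbar**2/4 * (n**2*np.pi**2/3 - 2), atol=1e-5)
```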
22. We're looking for a rough order of magnitude estimate, so go crazy with the approximations. Model the ice pick as a mass $m$ and length $L$, standing vertically on the point, i.e. an inverted pendulum. The angular acceleration is $\ddot\theta$, the moment of inertia is $mL^2$ and the torque is $mgL\sin\theta$ where $\theta$ is the angle from the vertical. So $mL^2\ddot\theta = mgL\sin\theta$ or $\ddot\theta = (g/L)\sin\theta$. Since $\theta \approx 0$ as the pick starts to fall, take $\sin\theta = \theta$ so

$$\theta(t) = A\exp\left(\sqrt{\frac gL}\,t\right) + B\exp\left(-\sqrt{\frac gL}\,t\right)$$
$$x_0 \equiv \theta(0)L = (A + B)L$$
$$p_0 \equiv m\dot\theta(0)L = m\sqrt{\frac gL}\,(A - B)L = \sqrt{m^2gL}\,(A - B)$$

Let the uncertainty principle relate $x_0$ and $p_0$, i.e.

$$x_0p_0 = \sqrt{m^2gL^3}\,(A^2 - B^2) = \hbar$$

Now ignore $B$; the exponential decay will become irrelevant quickly. You can notice that the pick is falling when it is tilting by something like $1° = \pi/180$, so solve for a time $T$ where $\theta(T) = \pi/180$. Then

$$T = \sqrt{\frac Lg}\,\ln\frac{\pi/180}{A} = \sqrt{\frac Lg}\left[\frac14\ln\frac{m^2gL^3}{\hbar^2} - \ln\frac{180}{\pi}\right]$$

Take $L = 10$ cm, so $\sqrt{L/g} \approx 0.1$ sec, but the action is in the logarithms. (It is worth your time to confirm that the argument of the logarithm in the first term is indeed dimensionless.) Now $\ln(180/\pi) \approx 4$ but the first term appears to be much larger. This is good, since it means that quantum mechanics is driving the result. For $m = 0.1$ kg, find $m^2gL^3/\hbar^2 = 10^{64}$, and so $T = 0.1\ \text{sec}\times(147/4 - 4) \approx 3$ sec. I'd say that's a surprising and interesting result.
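Plugging in the numbers, as a short Python sketch of the estimate above:

```python
import numpy as np

hbar = 1.054e-34          # J s
g, L, m = 9.8, 0.10, 0.1  # SI units: 10 cm pick, 0.1 kg

big_log = np.log(m**2 * g * L**3 / hbar**2)     # ~ ln(1e64) ~ 147
T = np.sqrt(L/g) * (0.25*big_log - np.log(180/np.pi))

assert 140 < big_log < 155   # quantum term dominates
assert 2.5 < T < 4.0         # ~3 seconds, as quoted
```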
23. The eigenvalues of $A$ are obviously $\pm a$, with $-a$ twice. The characteristic equation for $B$ is $(b - \lambda)(-\lambda)^2 - (b - \lambda)(-ib)(ib) = (b - \lambda)(\lambda^2 - b^2) = 0$, so its eigenvalues are $\pm b$ with $b$ twice. (Yes, $B$ has degenerate eigenvalues.) It is easy enough to show that

$$AB = \begin{pmatrix} ab & 0 & 0\\ 0 & 0 & iab\\ 0 & -iab & 0\end{pmatrix} = BA$$

so $A$ and $B$ commute, and therefore must have simultaneous eigenvectors. To find these, write the eigenvector components as $u_i$, $i = 1, 2, 3$. Clearly, the basis states $|1\rangle$, $|2\rangle$, and $|3\rangle$ are eigenvectors of $A$ with eigenvalues $a$, $-a$, and $-a$ respectively. So, do the math to find
the eigenvectors for $B$ in this basis. Presumably, some freedom will appear that allows us to find linear combinations that are also eigenvectors of $A$. One of these is obviously $|1\rangle \equiv |a, b\rangle$, so just work with the reduced $2\times2$ basis of states $|2\rangle$ and $|3\rangle$. Indeed, both of these states have eigenvalue $-a$ for $A$, so one linear combination should have eigenvalue $+b$ for $B$, and the orthogonal combination eigenvalue $-b$.

Let the eigenvector components be $u_2$ and $u_3$. Then, for eigenvalue $+b$,

$$-ibu_3 = +bu_2\qquad\text{and}\qquad ibu_2 = +bu_3$$

both of which imply $u_3 = iu_2$. For eigenvalue $-b$,

$$-ibu_3 = -bu_2\qquad\text{and}\qquad ibu_2 = -bu_3$$

both of which imply $u_3 = -iu_2$. Choosing $u_2$ to be real, then (no, the eigenvalue alone does not completely characterize the eigenket) we have the set of simultaneous eigenstates:

Eigenvalue of $A$ | Eigenvalue of $B$ | Eigenstate
$a$ | $b$ | $|1\rangle$
$-a$ | $b$ | $\frac{1}{\sqrt2}\left(|2\rangle + i|3\rangle\right)$
$-a$ | $-b$ | $\frac{1}{\sqrt2}\left(|2\rangle - i|3\rangle\right)$
24. This problem also appears to belong in Chapter Three. The Pauli matrices are not defined in Chapter One, but perhaps one could simply define these matrices, here and in Problems 2 and 3.

Operating on the spinor representation of $|+\rangle$ with $(1/\sqrt2)(1 + i\sigma_x)$ gives

$$\frac{1}{\sqrt2}\left[\begin{pmatrix}1&0\\0&1\end{pmatrix} + i\begin{pmatrix}0&1\\1&0\end{pmatrix}\right]\begin{pmatrix}1\\0\end{pmatrix} = \frac{1}{\sqrt2}\begin{pmatrix}1&i\\i&1\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix} = \frac{1}{\sqrt2}\begin{pmatrix}1\\i\end{pmatrix}$$

So, for an operator $U$ such that $U \doteq (1/\sqrt2)(1 + i\sigma_x)$, we observe that $U|+\rangle = |S_y;+\rangle$, defined in (1.4.17b). Similarly operating on the spinor representation of $|-\rangle$ gives

$$\frac{1}{\sqrt2}\left[\begin{pmatrix}1&0\\0&1\end{pmatrix} + i\begin{pmatrix}0&1\\1&0\end{pmatrix}\right]\begin{pmatrix}0\\1\end{pmatrix} = \frac{1}{\sqrt2}\begin{pmatrix}1&i\\i&1\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix} = \frac{1}{\sqrt2}\begin{pmatrix}i\\1\end{pmatrix} = i\,\frac{1}{\sqrt2}\begin{pmatrix}1\\-i\end{pmatrix}$$

that is, $U|-\rangle = i|S_y;-\rangle$. This is what we would mean by a rotation about the $x$-axis by 90°. The sense of the rotation is about the $+x$ direction vector, so this would actually be a rotation of $-\pi/2$. (See the diagram following Problem Nine.) The phase factor $i = e^{i\pi/2}$ does not affect these conclusions, and in fact leads to observable quantum mechanical effects. (This is all discussed in Chapter Three.) The matrix elements of $S_z$ in the $S_y$ basis are then

$$\langle S_y;+|S_z|S_y;+\rangle = \langle+|U^\dagger S_zU|+\rangle$$
$$\langle S_y;+|S_z|S_y;-\rangle = -i\langle+|U^\dagger S_zU|-\rangle$$
$$\langle S_y;-|S_z|S_y;+\rangle = i\langle-|U^\dagger S_zU|+\rangle$$
$$\langle S_y;-|S_z|S_y;-\rangle = \langle-|U^\dagger S_zU|-\rangle$$
Note that $\sigma_x^\dagger = \sigma_x$ and $\sigma_x^2 = 1$, so

$$U^\dagger U \doteq \frac{1}{\sqrt2}(1 - i\sigma_x)\,\frac{1}{\sqrt2}(1 + i\sigma_x) = \frac12\left(1 + \sigma_x^2\right) = 1$$

and $U$ is therefore unitary. (This is no accident, as will be discussed when rotation operators are presented in Chapter Three.) Furthermore $\sigma_z\sigma_x = -\sigma_x\sigma_z$, so

$$U^\dagger S_zU \doteq \frac{1}{\sqrt2}(1 - i\sigma_x)\,\frac\hbar2\sigma_z\,\frac{1}{\sqrt2}(1 + i\sigma_x) = \frac\hbar2\,\frac12(1 - i\sigma_x)^2\sigma_z = -i\frac\hbar2\sigma_x\sigma_z$$
$$= -i\frac\hbar2\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1&0\\0&-1\end{pmatrix} = \frac\hbar2\begin{pmatrix}0&i\\-i&0\end{pmatrix}$$

so

$$S_z \doteq \frac\hbar2\begin{pmatrix}0&1\\1&0\end{pmatrix} = \frac\hbar2\sigma_x$$

in the $|S_y;\pm\rangle$ basis. This can be easily checked directly with (1.4.17b), that is

$$S_z|S_y;\pm\rangle = \frac\hbar2\,\frac{1}{\sqrt2}\left[|+\rangle \mp i|-\rangle\right] = \frac\hbar2|S_y;\mp\rangle$$

There seems to be a mistake in the old solution manual, finding $S_z = (\hbar/2)\sigma_y$ instead of $\sigma_x$.
25. Transforming to another representation, say the basis $|c'\rangle$, we carry out the calculation

$$\langle c''|A|c'\rangle = \sum_{b'}\sum_{b''}\langle c''|b''\rangle\langle b''|A|b'\rangle\langle b'|c'\rangle$$

There is no principle which says that the $\langle c''|b''\rangle$ need to be real, so $\langle c''|A|c'\rangle$ is not necessarily real if $\langle b''|A|b'\rangle$ is real. The problem alludes to Problem 24 as an example, but not that specific question (assuming my solution is correct). Still, it is obvious, for example, that the operator $S_y$ is real in the $|S_y;\pm\rangle$ basis, but is not in the $|\pm\rangle$ basis.

For another example, also suggested in the text, if you calculate

$$\langle p''|x|p'\rangle = \int\langle p''|x|x'\rangle\langle x'|p'\rangle\,dx' = \int x'\langle p''|x'\rangle\langle x'|p'\rangle\,dx' = \frac{1}{2\pi\hbar}\int x'\,e^{i(p' - p'')x'/\hbar}\,dx'$$

and then define $q \equiv p' - p''$ and $y \equiv x'/\hbar$, then

$$\langle p''|x|p'\rangle = \frac{\hbar}{2\pi i}\frac{d}{dq}\int e^{iqy}\,dy = \frac\hbar i\frac{d}{dq}\delta(q)$$

so you can also see that although $x$ is real in the $|x'\rangle$ basis, it is not so in the $|p'\rangle$ basis.
26. From (1.4.17a), $|S_x;\pm\rangle = (|+\rangle \pm |-\rangle)/\sqrt2$, so clearly

$$U \doteq \frac{1}{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix} = \begin{pmatrix}1/\sqrt2\\1/\sqrt2\end{pmatrix}\begin{pmatrix}1&0\end{pmatrix} + \begin{pmatrix}1/\sqrt2\\-1/\sqrt2\end{pmatrix}\begin{pmatrix}0&1\end{pmatrix}$$
$$= |S_x;+\rangle\langle+| + |S_x;-\rangle\langle-| \doteq \sum_r |b^{(r)}\rangle\langle a^{(r)}|$$
27. The idea here is simple. Just insert a complete set of states. Firstly,

$$\langle b''|f(A)|b'\rangle = \sum_{a'}\langle b''|f(A)|a'\rangle\langle a'|b'\rangle = \sum_{a'}f(a')\,\langle b''|a'\rangle\langle a'|b'\rangle$$

The numbers $\langle a'|b'\rangle$ (and $\langle b''|a'\rangle$) constitute the transformation matrix between the two sets of basis states. Similarly for the continuum case,

$$\langle\mathbf p''|F(r)|\mathbf p'\rangle = \int\langle\mathbf p''|F(r)|\mathbf x'\rangle\langle\mathbf x'|\mathbf p'\rangle\,d^3x' = \int F(r')\,\langle\mathbf p''|\mathbf x'\rangle\langle\mathbf x'|\mathbf p'\rangle\,d^3x' = \frac{1}{(2\pi\hbar)^3}\int F(r')\,e^{i(\mathbf p' - \mathbf p'')\cdot\mathbf x'/\hbar}\,d^3x'$$

The angular parts of the integral can be done explicitly. Let $\mathbf q \equiv \mathbf p' - \mathbf p''$ define the $z$-direction. Then

$$\langle\mathbf p''|F(r)|\mathbf p'\rangle = \frac{2\pi}{(2\pi\hbar)^3}\int dr'\,r'^2F(r')\int_0^\pi \sin\theta\,d\theta\,e^{iqr'\cos\theta/\hbar} = \frac{1}{4\pi^2\hbar^3}\int dr'\,r'^2F(r')\int_{-1}^{1}d\mu\,e^{iqr'\mu/\hbar}$$
$$= \frac{1}{4\pi^2\hbar^3}\int dr'\,r'^2F(r')\,\frac{\hbar}{iqr'}\,2i\sin(qr'/\hbar) = \frac{1}{2\pi^2\hbar^2}\int dr'\,r'^2F(r')\,\frac{\sin(qr'/\hbar)}{qr'}$$
28. For functions $f(q, p)$ and $g(q, p)$, where $q$ and $p$ are conjugate position and momentum, respectively, the Poisson bracket from classical physics is

$$[f, g]_{\text{classical}} = \frac{\partial f}{\partial q}\frac{\partial g}{\partial p} - \frac{\partial f}{\partial p}\frac{\partial g}{\partial q}\qquad\text{so}\qquad [x, F(p_x)]_{\text{classical}} = \frac{\partial F}{\partial p_x}$$

Using (1.6.47), then, we have

$$\left[x, \exp\left(\frac{ip_xa}{\hbar}\right)\right] = i\hbar\left[x, \exp\left(\frac{ip_xa}{\hbar}\right)\right]_{\text{classical}} = i\hbar\,\frac{\partial}{\partial p_x}\exp\left(\frac{ip_xa}{\hbar}\right) = -a\exp\left(\frac{ip_xa}{\hbar}\right)$$

To show that $\exp(ip_xa/\hbar)|x'\rangle$ is an eigenstate of position, act on it with $x$. So

$$x\exp\left(\frac{ip_xa}{\hbar}\right)|x'\rangle = \left[\exp\left(\frac{ip_xa}{\hbar}\right)x - a\exp\left(\frac{ip_xa}{\hbar}\right)\right]|x'\rangle = (x' - a)\exp\left(\frac{ip_xa}{\hbar}\right)|x'\rangle$$

In other words, $\exp(ip_xa/\hbar)|x'\rangle$ is an eigenstate of $x$ with eigenvalue $x' - a$. That is, $\exp(ip_xa/\hbar)|x'\rangle$ is the translation operator with $\Delta x' = -a$, but we knew that. See (1.6.36).


29. I wouldn't say this is easily derived, but it is straightforward. Expressing $G(\mathbf p)$ as a power series means $G(\mathbf p) = \sum_{nm\ell}a_{nm\ell}\,p_i^np_j^mp_k^\ell$. Now

$$[x_i, p_i^n] = x_ip_ip_i^{n-1} - p_i^nx_i = i\hbar p_i^{n-1} + p_ix_ip_i^{n-1} - p_i^nx_i$$
$$= 2i\hbar p_i^{n-1} + p_i^2x_ip_i^{n-2} - p_i^nx_i = \cdots = ni\hbar p_i^{n-1}$$

so

$$[x_i, G(\mathbf p)] = i\hbar\frac{\partial G}{\partial p_i}$$
The procedure is essentially identical to prove that $[p_i, F(\mathbf x)] = -i\hbar\,\partial F/\partial x_i$. As for

$$[x^2, p^2] = x^2p^2 - p^2x^2 = x^2p^2 - xp^2x + xp^2x - p^2x^2 = x[x, p^2] + [x, p^2]x$$

make use of $[x, p^2] = i\hbar\,\partial(p^2)/\partial p = 2i\hbar p$, so that $[x^2, p^2] = 2i\hbar(xp + px)$. The classical Poisson bracket is $[x^2, p^2]_{\text{classical}} = (2x)(2p) - 0 = 4xp$, and so $[x^2, p^2] = i\hbar[x^2, p^2]_{\text{classical}}$ when we let the (classical quantities) $x$ and $p$ commute.
30. This is very similar to problem 28. Using problem 29,

$$[x_i, \mathcal J(\mathbf l)] = \left[x_i, \exp\left(\frac{-i\mathbf p\cdot\mathbf l}{\hbar}\right)\right] = i\hbar\frac{\partial}{\partial p_i}\exp\left(\frac{-i\mathbf p\cdot\mathbf l}{\hbar}\right) = l_i\exp\left(\frac{-i\mathbf p\cdot\mathbf l}{\hbar}\right) = l_i\,\mathcal J(\mathbf l)$$

We can use this result to calculate the expectation value of $x_i$. First note that

$$\mathcal J^\dagger(\mathbf l)\,[x_i, \mathcal J(\mathbf l)] = \mathcal J^\dagger(\mathbf l)\,x_i\,\mathcal J(\mathbf l) - \mathcal J^\dagger(\mathbf l)\mathcal J(\mathbf l)\,x_i = \mathcal J^\dagger(\mathbf l)\,x_i\,\mathcal J(\mathbf l) - x_i = \mathcal J^\dagger(\mathbf l)\,l_i\,\mathcal J(\mathbf l) = l_i$$

Therefore, under translation,

$$\langle x_i\rangle = \langle\alpha|x_i|\alpha\rangle \to \langle\alpha|\mathcal J^\dagger(\mathbf l)\,x_i\,\mathcal J(\mathbf l)|\alpha\rangle = \langle\alpha|(x_i + l_i)|\alpha\rangle = \langle x_i\rangle + l_i$$

which is exactly what you expect from a translation operator.
31. This is a continued rehash of the last few problems. Since $[x, \mathcal J(dx')] = dx'$ by (1.6.25), and since $\mathcal J^\dagger[x, \mathcal J] = \mathcal J^\dagger x\mathcal J - x$, we have $\mathcal J^\dagger(dx')\,x\,\mathcal J(dx') = x + \mathcal J^\dagger(dx')\,dx' = x + dx'$, since we only keep the lowest order in $dx'$. Therefore $\langle x\rangle \to \langle x\rangle + dx'$. Similarly, from (1.6.45), $[p, \mathcal J(dx')] = 0$, so $\mathcal J^\dagger[p, \mathcal J] = \mathcal J^\dagger p\mathcal J - p = 0$. That is, $\mathcal J^\dagger p\mathcal J = p$ and $\langle p\rangle \to \langle p\rangle$.
32. These are all straightforward. In the following, all integrals are taken with limits from $-\infty$ to $\infty$. One thing to keep in mind is that odd integrands give zero for the integral, so the right change of variables can be very useful. Also recall that $\int\exp(-ax^2)\,dx = \sqrt{\pi/a}$, and $\int x^2\exp(-ax^2)\,dx = -(d/da)\int\exp(-ax^2)\,dx = \sqrt\pi/2a^{3/2}$. So, for the $x$-space case,

$$\langle p\rangle = \int\langle\alpha|x'\rangle\,\frac\hbar i\frac{d}{dx'}\,\langle x'|\alpha\rangle\,dx' = \frac{1}{d\sqrt\pi}\int\hbar k\exp\left(-\frac{x'^2}{d^2}\right)dx' = \hbar k$$

$$\langle p^2\rangle = -\hbar^2\int\langle\alpha|x'\rangle\,\frac{d^2}{dx'^2}\,\langle x'|\alpha\rangle\,dx'$$
$$= -\frac{\hbar^2}{d\sqrt\pi}\int\exp\left(-ikx' - \frac{x'^2}{2d^2}\right)\frac{d}{dx'}\left[\left(ik - \frac{x'}{d^2}\right)\exp\left(ikx' - \frac{x'^2}{2d^2}\right)\right]dx'$$
$$= -\frac{\hbar^2}{d\sqrt\pi}\int\left[-\frac{1}{d^2} + \left(ik - \frac{x'}{d^2}\right)^2\right]\exp\left(-\frac{x'^2}{d^2}\right)dx'$$
$$= \hbar^2\left(\frac{1}{d^2} + k^2\right) - \frac{\hbar^2}{d\sqrt\pi}\,\frac{\sqrt\pi\,d^3}{2}\,\frac{1}{d^4} = \hbar^2\left(\frac{1}{d^2} + k^2\right) - \frac{\hbar^2}{2d^2} = \frac{\hbar^2}{2d^2} + \hbar^2k^2$$
Using instead the momentum space wave function (1.7.42), we have

$$\langle p\rangle = \int\langle\alpha|p'\rangle\,p'\,\langle p'|\alpha\rangle\,dp' = \int p'\,|\langle p'|\alpha\rangle|^2\,dp' = \frac{d}{\hbar\sqrt\pi}\int p'\exp\left(-\frac{(p' - \hbar k)^2d^2}{\hbar^2}\right)dp'$$
$$= \frac{d}{\hbar\sqrt\pi}\int(q + \hbar k)\exp\left(-\frac{q^2d^2}{\hbar^2}\right)dq = \hbar k$$

$$\langle p^2\rangle = \frac{d}{\hbar\sqrt\pi}\int(q + \hbar k)^2\exp\left(-\frac{q^2d^2}{\hbar^2}\right)dq = \frac{d}{\hbar\sqrt\pi}\left[\frac{\sqrt\pi}{2}\frac{\hbar^3}{d^3} + (\hbar k)^2\frac{\sqrt\pi\,\hbar}{d}\right] = \frac{\hbar^2}{2d^2} + \hbar^2k^2$$
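Both computations can be spot-checked numerically from the position-space wave function. This is a numpy sketch with arbitrary $k$ and $d$, $\hbar = 1$, and derivatives taken by finite differences:

```python
import numpy as np

hbar, k, d = 1.0, 2.0, 0.7
x = np.linspace(-12, 12, 400001)
dx = x[1] - x[0]

# Gaussian packet, (1.7.35): normalized so that integral of |psi|^2 is 1
psi = (np.pi**0.25 * np.sqrt(d))**-1 * np.exp(1j*k*x - x**2/(2*d**2))

dpsi  = np.gradient(psi, dx)
d2psi = np.gradient(dpsi, dx)
p1 = (np.sum(np.conj(psi) * (hbar/1j) * dpsi) * dx).real    # <p>
p2 = (np.sum(np.conj(psi) * (-hbar**2) * d2psi) * dx).real  # <p^2>

assert np.isclose(p1, hbar*k, atol=1e-3)
assert np.isclose(p2, hbar**2/(2*d**2) + hbar**2*k**2, atol=1e-3)
```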
33. I can't help but think this problem can be done by creating a momentum translation operator, but instead I will follow the original solution manual. This approach uses the position space representation and Fourier transform to arrive at the answer. Start with

$$\langle p''|x|p'\rangle = \int\langle p''|x|x'\rangle\langle x'|p'\rangle\,dx' = \int x'\langle p''|x'\rangle\langle x'|p'\rangle\,dx' = \frac{1}{2\pi\hbar}\int x'\exp\left[\frac i\hbar(p' - p'')x'\right]dx'$$
$$= i\hbar\frac{\partial}{\partial p''}\,\frac{1}{2\pi\hbar}\int\exp\left[\frac i\hbar(p' - p'')x'\right]dx' = i\hbar\frac{\partial}{\partial p''}\delta(p' - p'')$$

Now find $\langle p''|x|\alpha\rangle$ by inserting a complete set of states $|p'\rangle$, that is

$$\langle p''|x|\alpha\rangle = \int\langle p''|x|p'\rangle\langle p'|\alpha\rangle\,dp' = i\hbar\int\frac{\partial}{\partial p''}\delta(p' - p'')\,\langle p'|\alpha\rangle\,dp' = i\hbar\frac{\partial}{\partial p''}\langle p''|\alpha\rangle$$

Given this, the next expression is simple to prove, namely

$$\langle\beta|x|\alpha\rangle = \int dp'\,\langle\beta|p'\rangle\langle p'|x|\alpha\rangle = \int dp'\,\phi_\beta^*(p')\,i\hbar\frac{\partial}{\partial p'}\phi_\alpha(p')$$

using the standard definition $\phi_\gamma(p') \equiv \langle p'|\gamma\rangle$.

Certainly the operator $\mathcal T(\Xi) \equiv \exp(ix\Xi/\hbar)$ looks like a momentum translation operator. So, we should try to work out $p\,\mathcal T(\Xi)|p'\rangle = p\exp(ix\Xi/\hbar)|p'\rangle$ and see if we get $|p' + \Xi\rangle$. Take a lesson from problem 28, and make use of the result from problem 29, and we have

$$p\,\mathcal T(\Xi)|p'\rangle = \left\{\mathcal T(\Xi)\,p + [p, \mathcal T(\Xi)]\right\}|p'\rangle = \left\{\mathcal T(\Xi)\,p - i\hbar\frac{\partial}{\partial x}\mathcal T(\Xi)\right\}|p'\rangle = (p' + \Xi)\,\mathcal T(\Xi)|p'\rangle$$

and, indeed, $\mathcal T(\Xi)|p'\rangle$ is an eigenstate of $p$ with eigenvalue $p' + \Xi$. In fact, this could have been done first, and then one could write down the translation operator for infinitesimal momenta, and derive the expression for $\langle p''|x|\alpha\rangle$ the same way as done in the text for infinitesimal spatial translations. (I like this way of wording the problem, and maybe it will be changed in the next edition.)