
Lectures on Riemann matrices

By
C.L. Siegel
Tata Institute of Fundamental Research, Bombay
1963
Lectures on Riemann matrices
By
C.L. Siegel
Notes by
S. Raghavan
and
S.S. Rangachari
No part of this book may be reproduced in any form by print, microfilm or any other means without written permission from the Tata Institute of Fundamental Research, Colaba, Bombay 5
Tata Institute of Fundamental Research
Bombay
1963
Foreword
The following lecture notes were carefully prepared by Dr. S. Raghavan and Dr. S.S. Rangachari. I thank them for their most valuable collaboration.
Carl Ludwig Siegel
Contents
1 Chapter 1 1
1 Introduction: Abelian Functions . . . . . . . . . . . . . 1
2 The commutator-algebra of a R-matrix . . . . . . . . . . 5
3 Division algebras over Q with a positive involution . . . 13
4 Cyclic algebras . . . . . . . . . . . . . . . . . . . . . . 27
5 Division algebras over Q... . . . . . . . . . . . . . . . . 32
6 Positive involutions of the second kind in division algebras 36
7 Existence of R-matrices with given commutator-algebra . 41
8 Modular groups associated with Riemann matrices . . . 81
Chapter 1
1 Introduction: Abelian Functions
In this course of lectures, we shall be concerned with a systematic study of Riemann matrices which arise in a natural way from the theory of abelian functions. This introductory article will be devoted to explaining this connection.
Let $u_1, \ldots, u_n$ be $n$ independent complex variables and let $u$ denote the column vector with entries $u_1, \ldots, u_n$.
We shall denote by $C^n$ the $n$-dimensional complex euclidean space and by $C$ the field of complex numbers. Let $f(u)$ be an abelian function of $u$; in other words, $f(u)$ is a complex-valued function defined and meromorphic in $C^n$ and having $2n$ periods $w_1, \ldots, w_{2n}$ linearly independent over the field of real numbers (i.e. for $1 \le i \le 2n$, $f(u + w_i) = f(u)$). We suppose further that $f(u)$ is a non-degenerate abelian function, i.e. there does not exist any complex linear transformation of the variables $u_1, \ldots, u_n$ such that $f(u)$ can be brought to depend on strictly less than $n$ complex variables.
The periods of $f(u)$ form a lattice $\mathfrak{L}$ in $C^n$, which we may assume, without loss of generality, to be generated by $w_1, \ldots, w_{2n}$ over the ring $Z$ of rational integers. The matrix $P = (w_1\ w_2 \ldots w_{2n})$ of $n$ rows and $2n$ columns is called a period-matrix of the lattice $\mathfrak{L}$. Any other period-matrix $P_1$ of $\mathfrak{L}$ is of the form $PU$ where $U$ is unimodular (i.e. $U$ is a rational integral matrix of determinant $\pm 1$).
The abelian functions admitting all elements of $\mathfrak{L}$ as periods form a field $G$. It is known that there exist $n+1$ abelian functions $f_0(u)$, $f_1(u), \ldots, f_n(u)$ in $G$ such that $f_1(u), \ldots, f_n(u)$ are algebraically independent over $C$ (and, in fact, even analytically independent), $f_0(u)$ depends algebraically upon $f_1(u), \ldots, f_n(u)$ and further $G = C(f_0(u), \ldots, f_n(u))$. In other words, $G$ is an algebraic function field of $n$ variables over $C$.
Let now $L$ be another field of abelian functions of the form $g(u) = f(K^{-1}u)$ for $f(u) \in G$ and a fixed complex nonsingular matrix $K$. Let us suppose, further, that $L$ has period-lattice $\mathfrak{L}^*$ contained in $\mathfrak{L}$. Then it is easy to show that $L$ is an algebraic extension of $G$. Moreover, if $Q$ is a period-matrix of $\mathfrak{L}^*$, then, on the one hand, $Q = KPU$ for a unimodular $U$ and, on the other hand, $Q = PG_1$ for a nonsingular rational integral matrix $G_1$. Thus we have

$KP = PG$   (1)

with complex nonsingular $K$ and rational integral $G$. We call any such $K$ a complex multiplication of $P$, and $G$ a multiplier of $P$. Our object is to study the nature of the set of $K$ and $G$ satisfying the matrix equation (1).

To this end, we first relax our conditions and ask for all rational $2n$-rowed square matrices $M$ satisfying the condition

$KP = PM$   (2)

with a suitable complex matrix $K$. It is easy to verify that the set of such $M$ is an algebra of finite rank over the field $Q$ of rational numbers. We denote this abstract algebra by $M$, while the set of matrices $M$ gives a matrix representation of $M$ which we denote by $(M)$.
For the period-matrix $P$, there exists a rational $2n$-rowed alternate non-singular matrix $A$ such that

i) $P A^{-1} P' = 0$   (3)
ii) $H = \sqrt{-1}\, P A^{-1} \bar{P}' > 0$ (i.e. positive hermitian).

We call $A$ a principal matrix for $P$.

Definition. Any complex matrix $P$ of $n$ rows and $2n$ columns satisfying (3) for some principal matrix $A$ is called an ($n$-rowed) Riemann matrix.
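For $n = 1$ the relations (3) are easy to check numerically. The following sketch (our own illustration, not from the text; the point $z$ and the matrix $A$ are our choice) verifies them for the period matrix $P = (z\ \ 1)$ of an elliptic curve, with $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$:

```python
# Check the Riemann period relations (3) for n = 1:
# P = (z  1) with Im z > 0, and A = (0 1; -1 0).

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]

def conj_T(X):
    """Conjugate transpose."""
    return [[complex(X[i][j]).conjugate() for i in range(len(X))]
            for j in range(len(X[0]))]

z = 0.3 + 1.7j                      # any point of the upper half-plane
P = [[z, 1]]                        # 1 x 2 period matrix
A_inv = [[0, -1], [1, 0]]           # inverse of A = (0 1; -1 0)

first = matmul(matmul(P, A_inv), transpose(P))[0][0]  # P A^{-1} P'
H = 1j * matmul(matmul(P, A_inv), conj_T(P))[0][0]    # i P A^{-1} conj(P)'

print(abs(first), H.real)   # first relation vanishes; H = 2 Im z > 0
```

With this choice one finds $H = 2\,\mathrm{Im}\, z$, so relation ii) is exactly the condition that $z$ lies in the upper half-plane.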
Conditions (3) are known as Riemann's period relations. In the case when $A = \begin{pmatrix} 0 & E \\ -E & 0 \end{pmatrix}$ ($E$ being the $n$-rowed identity matrix), conditions (3) were given by Riemann [16] as precisely the conditions to be satisfied by the periods of a normalized complete system of abelian integrals of the first kind on a Riemann surface of genus $n$. It was shown by Poincaré that conditions (3) are necessary and sufficient for $P$ to be a period-matrix of a nondegenerate abelian function.
Let now $Q = \begin{pmatrix} P \\ \bar{P} \end{pmatrix}$. Then conditions (3) may be rewritten as

$i\, Q A^{-1} \bar{Q}' = \begin{pmatrix} H & 0 \\ 0 & -\bar{H} \end{pmatrix}$  $(i = \sqrt{-1})$   (4)

with $H$ positive hermitian. If $W = i Q A^{-1} \bar{Q}'$, then $W$, and therefore $Q$, are nonsingular. We may now reformulate (2) as

$TQ = QM$   (5)

where $T = \begin{pmatrix} K & 0 \\ 0 & \bar{K} \end{pmatrix}$.
Following H. Weyl, we introduce the $2n$-rowed matrix $L = \begin{pmatrix} -iE & 0 \\ 0 & iE \end{pmatrix}$ and consider, instead of $P$, the matrix

$R = Q^{-1} L Q.$   (6)
Under the transformation $P \to DP$, or equivalently $Q \to \begin{pmatrix} D & 0 \\ 0 & \bar{D} \end{pmatrix} Q$ (with arbitrary complex nonsingular $D$), $R$ remains unchanged. If $P$ is a period-matrix, this has the significance that $R$ as defined by (6) is independent of the choice of the differentials $du_1, \ldots, du_n$ of the first kind on the abelian variety associated with $P$.
The advantage in working with $R$ is, in the first place, that $R$ is real, as we shall see presently, and, further, that equation (5) may be written more simply as

$RM = MR$   (7)

using the fact that $LT = TL$. Thus $M$ has to be just a $2n$-rowed rational matrix commuting with $R$. Conversely, if $M$ is such a matrix, then, defining $T = QMQ^{-1}$, we have $LT = TL$. But, from the form of $L$, we see that $T = \begin{pmatrix} K & 0 \\ 0 & K_1 \end{pmatrix}$ with $n$-rowed square matrices $K$ and $K_1$. But $TQ = QM$ gives $KP = PM$ and $K_1\bar{P} = \bar{P}M$, i.e. $\bar{K}_1 P = PM = KP$, which, in turn, leads to $K = \bar{K}_1$ (since $P$ is of rank $n$). Thus the rational solutions $M$ of (7) are the same as those of (5).
Proposition 1. The matrix $R$ defined by (6) has the following properties:

(i) $R$ is real,
(ii) $R^2 = -E$ ($E$ being the $2n$-rowed identity matrix),
and (iii) $S = AR$ is positive symmetric.

Conversely, any $2n$-rowed rational matrix having properties (i), (ii) and (iii) leads to a Riemann matrix $P$ which is uniquely determined upto a left-sided nonsingular factor, and $P$ has $A$ for a principal matrix.
Proof. Let $V = \begin{pmatrix} 0 & E \\ E & 0 \end{pmatrix}$, with $E$ being the $n$-rowed identity matrix. Since $\bar{Q} = VQ$ and $V^{-1}\bar{L}V = L$, we have $\bar{R} = \bar{Q}^{-1}\bar{L}\bar{Q} = Q^{-1}V^{-1}\bar{L}VQ = R$, which proves (i). From $L^2 = -E$, (ii) follows. To prove (iii), we set $F = iQA^{-1}\bar{Q}' = \begin{pmatrix} H & 0 \\ 0 & -\bar{H} \end{pmatrix}$. Then $F = \bar{F}'$ and $S = AR = AQ^{-1}LQ = i\bar{Q}'F^{-1}LQ = \bar{Q}'F^{-1}\begin{pmatrix} E & 0 \\ 0 & -E \end{pmatrix} Q$. But $F^{-1}\begin{pmatrix} E & 0 \\ 0 & -E \end{pmatrix} = \begin{pmatrix} H^{-1} & 0 \\ 0 & \bar{H}^{-1} \end{pmatrix}$ is positive hermitian and so is its transform $S$. Since $S$ is real, our assertion (iii) is proved.
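As a numerical illustration (our own example, not from the text; the value of $z$ is arbitrary), the sketch below builds $R = Q^{-1}LQ$ for the one-variable period matrix $P = (z\ \ 1)$ and checks the three properties of Proposition 1:

```python
# Verify Proposition 1 for n = 1: P = (z  1) with z in the upper
# half-plane, A = (0 1; -1 0), Q = (P over conj P), L = diag(-i, i).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

z = 0.3 + 1.7j
Q = [[z, 1], [z.conjugate(), 1]]
L = [[-1j, 0], [0, 1j]]
d = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]          # det Q = z - conj z
Q_inv = [[Q[1][1] / d, -Q[0][1] / d],
         [-Q[1][0] / d, Q[0][0] / d]]

R = matmul(matmul(Q_inv, L), Q)                    # (6)
A = [[0, 1], [-1, 0]]
S = matmul(A, R)                                   # S = AR
R2 = matmul(R, R)                                  # should be -E

real_R = max(abs(R[i][j].imag) for i in range(2) for j in range(2))
print(real_R)                                      # ~0: R is real
```

Here $S$ comes out symmetric with positive trace and determinant, as (iii) requires.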
Conversely, let $R$ have the properties (i), (ii) and (iii). From (ii), the eigenvalues of $R$ are $+i$ and $-i$ and they occur with the same multiplicity $n$, since the characteristic equation of $R$ is of degree $2n$ and has real coefficients. Thus it may be seen that there is a complex non-singular matrix $C$ such that $R = C^{-1}LC$. If $C_0$ also satisfies $C_0^{-1}LC_0 = R$, then $C_0 = \begin{pmatrix} B_1 & 0 \\ 0 & B_2 \end{pmatrix} C$ with complex $n$-rowed non-singular matrices $B_1$ and $B_2$. Now, from (i), $C^{-1}LC = \bar{C}^{-1}\bar{L}\bar{C} = (V\bar{C})^{-1}L(V\bar{C})$, since $\bar{L} = V^{-1}LV$, so that $V\bar{C} = \begin{pmatrix} B_1 & 0 \\ 0 & B_2 \end{pmatrix} C$, or $\bar{C} = \begin{pmatrix} 0 & B_2 \\ B_1 & 0 \end{pmatrix} C$. Splitting up $C$ as $\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}$ with $n$-rowed $C_1$, we have $\bar{C}_1 = B_2 C_2$ and $\bar{C}_2 = B_1 C_1$. We may now choose $Q = \begin{pmatrix} C_1 \\ B_2 C_2 \end{pmatrix} = \begin{pmatrix} E & 0 \\ 0 & B_2 \end{pmatrix} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}$. Then $Q^{-1}LQ = R$ and, if we denote $C_1$ by $P$, $Q = \begin{pmatrix} P \\ \bar{P} \end{pmatrix}$. We shall prove that $P$ is a Riemann matrix having $A$ for a principal matrix. In fact, from (iii), we have that $AQ^{-1}LQ = \bar{Q}'F^{-1}\begin{pmatrix} E & 0 \\ 0 & -E \end{pmatrix} Q$ is positive hermitian, where $F = iQA^{-1}\bar{Q}'$.
But then this means that $F^{-1}\begin{pmatrix} E & 0 \\ 0 & -E \end{pmatrix}$ is hermitian and positive. Therefore $\begin{pmatrix} E & 0 \\ 0 & -E \end{pmatrix} F$ is again positive hermitian. Writing $F = \begin{pmatrix} F_1 & F_2 \\ \bar{F}_2' & F_3 \end{pmatrix}$, we have $\begin{pmatrix} F_1 & F_2 \\ -\bar{F}_2' & -F_3 \end{pmatrix} = \begin{pmatrix} \bar{F}_1' & -F_2 \\ \bar{F}_2' & -\bar{F}_3' \end{pmatrix}$. Thus $F_2 = 0$ and $F_1$, $-F_3$ are positive hermitian. Writing $F = \begin{pmatrix} H & 0 \\ 0 & -H_1 \end{pmatrix}$ where $H$, $H_1$ are positive hermitian, it is trivial to see $H_1 = \bar{H}$. From the form of $F = iQA^{-1}\bar{Q}'$, this says precisely that $PA^{-1}P' = 0$ and $H = \sqrt{-1}\,PA^{-1}\bar{P}' > 0$, i.e. $P$ satisfies (3). Thus our proposition is completely proved.
For the sake of brevity, we shall call a matrix $R$ having properties (i), (ii) and (iii) mentioned in Proposition 1, a R-matrix. A real matrix satisfying just condition (iii) is referred to by H. Weyl [27] as a generalized Riemann matrix. We shall call the matrix $A$ a principal matrix for $R$, too.
2 The commutator-algebra of a R-matrix
In the last section, we reduced the problem of finding the set of rational matrices $M$ satisfying (2) for a suitable complex $K$ to that of finding all $2n$-rowed rational matrices $M$ which commute with a $2n$-rowed R-matrix $R$. We may now forget the period matrix $P$ which gave rise to $R$ and work with $R$ instead. As we remarked, the set of such commutators $M$ of $R$ is an algebra $(M)$ of finite rank over $Q$.

We shall now see that in $(M)$ we have an involution $M \to M^*$; this involution is known as the Rosati involution. Further, it is a positive involution in the sense that for any $M \in (M)$, the trace $\sigma(MM^*)$ of $MM^*$ is a positive rational number unless $M = 0$.

(For a complex square matrix $X$, we denote the trace by $\sigma(X)$ and the determinant by $|X|$.)
Proposition 2. We have in $(M)$ a positive involution.

Proof. For any $2n$-rowed complex square matrix $W$, define

$W^* = A^{-1} W' A.$

Then it is easy to verify that

$(W_1 + W_2)^* = A^{-1}(W_1 + W_2)'A = W_1^* + W_2^*$,
$(cW_1)^* = cW_1^*$ for any $c \in C$,   (8)
$(W_1 W_2)^* = A^{-1} W_2' W_1' A = W_2^* W_1^*$,
$(W^*)^* = A^{-1}(A^{-1}W'A)'A = W$

(the last, since $A' = -A$).
If $M \in (M)$, then $MR = RM$ and, by (8), $M^*R^* = R^*M^*$. But $R^* = A^{-1}R'A = -A^{-1}S = -R$. Thus $M^*R = RM^*$. Further, $M^*$ is a rational matrix and therefore $M^* \in (M)$. We now obtain from (8) that the mapping $M \to M^*$ of $(M)$ is an anti-automorphism of order 2, i.e. an involution.

Clearly $\sigma(MM^*) = \sigma(MA^{-1}M'A) = \sigma(MRS^{-1}M'SR^{-1}) = \sigma(RMS^{-1}M'SR^{-1}) = \sigma(MS^{-1}M'S) = \sigma(MC^{-1}C'^{-1}M'C'C) = \sigma(CMC^{-1} \cdot C'^{-1}M'C')$, where $C$ is a real nonsingular matrix such that $S = C'C$. Now, setting $G = CMC^{-1}$, we have $\sigma(MM^*) = \sigma(GG')$, which is strictly positive for $G \ne 0$ and zero for $G = 0$. Equivalently, $\sigma(MM^*) > 0$ for $M \ne 0$ in $(M)$ and $\sigma(MM^*) = 0$ for $M = 0$.
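A minimal concrete sketch (our own, not from the text): for the 2-rowed R-matrix $R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ with principal matrix $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, so that $S = AR = E$, the rational matrices commuting with $R$ are $M = aE + bR$, and the positivity $\sigma(MM^*) > 0$ can be checked directly:

```python
# Rosati involution M -> M* = A^{-1} M' A on the commutator algebra
# of R = (0 -1; 1 0), with principal matrix A = (0 1; -1 0).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]

A = [[0, 1], [-1, 0]]
A_inv = [[0, -1], [1, 0]]
R = [[0, -1], [1, 0]]

def star(M):
    """Rosati involution M* = A^{-1} M' A."""
    return matmul(matmul(A_inv, transpose(M)), A)

def sigma(M):
    """Trace."""
    return sum(M[i][i] for i in range(len(M)))

def rosati_trace(a, b):
    """sigma(M M*) for M = a E + b R, a matrix commuting with R."""
    M = [[a, -b], [b, a]]
    assert matmul(M, R) == matmul(R, M)      # M commutes with R
    return sigma(matmul(M, star(M)))

print(rosati_trace(3, -4))   # 2*(9 + 16) = 50 > 0
```

For this $R$ one finds $\sigma(MM^*) = 2(a^2 + b^2)$, positive unless $M = 0$, exactly as Proposition 2 asserts.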
We shall see later that the property of $(M)$ mentioned in Proposition 2 serves to characterise the algebra of multiplications of a Riemann matrix. More precisely, we shall prove that, except in some very special cases, any matrix algebra over $Q$ carrying a positive involution can be realized as the algebra of multiplications of a Riemann matrix. To this end, we need to prove some preliminary results.
A $2n$-rowed R-matrix $R$ is said to be reducible if there exists a rational $2n$-rowed non-singular matrix $C_1$ such that

$C_1^{-1} R C_1 = \begin{pmatrix} R_1 & R_{12} \\ 0 & R_2 \end{pmatrix}$   (9)

where $R_1$ is a matrix of $n_1$ $(< 2n)$ rows and $n_1$ columns. Otherwise, we say that $R$ is irreducible.

Let us remark that if $R$ is a reducible R-matrix, it is not a priori obvious from the form (9) of $C_1^{-1}RC_1$ whether $R_1$ and $R_2$ are again R-matrices and whether at least $n_1$ is even. We obtain clear information about this from
Theorem 1 (Poincaré, [12]). If $R$ is a $2n$-rowed reducible R-matrix, then there exists a rational non-singular matrix $C$ such that

$C^{-1} R C = \begin{pmatrix} R_1 & 0 \\ 0 & R_2 \end{pmatrix}$   (10)

where $R_1$ and $R_2$ are again R-matrices of $2r$ and $2(n-r)$ rows respectively.
Proof. We may take $R$ already in the form $\begin{pmatrix} R_1 & R_{12} \\ 0 & R_2 \end{pmatrix}$, without loss of generality. Let $n_1$ and $n_2$ $(= 2n - n_1)$ be the number of rows of $R_1$ and $R_2$ respectively, and let $E_i$ be the identity matrix of $n_i$ rows $(i = 1, 2)$. It is then enough to find first a suitable rational $X$ such that for $C = \begin{pmatrix} E_1 & X \\ 0 & E_2 \end{pmatrix}$, we have $C^{-1}RC$ in the form (10). Now

$C^{-1}RC = \begin{pmatrix} E_1 & -X \\ 0 & E_2 \end{pmatrix} \begin{pmatrix} R_1 & R_{12} \\ 0 & R_2 \end{pmatrix} \begin{pmatrix} E_1 & X \\ 0 & E_2 \end{pmatrix} = \begin{pmatrix} R_1 & R_{12} + R_1 X - X R_2 \\ 0 & R_2 \end{pmatrix}.$

If $X$ is rational and satisfies $R_{12} + R_1 X - X R_2 = 0$, we will be through. Breaking up $A$ as $\begin{pmatrix} A_1 & A_{12} \\ -A_{12}' & A_2 \end{pmatrix}$ and $S$ as $\begin{pmatrix} S_1 & S_{12} \\ S_{12}' & S_2 \end{pmatrix}$ in a similar way, we see that $A_1$ is a nonsingular alternate matrix, since $S_1 = A_1 R_1$ is positive symmetric. Thus $n_1$ is even; let $n_1 = 2r$ (say). Further, from $A_1 R_1 = S_1 = S_1' = -R_1' A_1$, we have

$A_1 R_{12} + A_{12} R_2 = S_{12} = (-A_{12}' R_1)' = -R_1' A_{12} = A_1 R_1 A_1^{-1} A_{12}.$

Setting $X = -A_1^{-1} A_{12}$, we have a rational matrix $X$ satisfying $R_{12} + R_1 X - X R_2 = 0$.
To complete the proof, we first remark that if $R$ is replaced by $C^{-1}RC$, then $A$, $S$ and $M$ have respectively to be replaced by $C'AC$, $C'SC$ and $C^{-1}MC$. Now

$C'AC = \begin{pmatrix} E_1 & 0 \\ X' & E_2 \end{pmatrix} \begin{pmatrix} A_1 & A_{12} \\ -A_{12}' & A_2 \end{pmatrix} \begin{pmatrix} E_1 & X \\ 0 & E_2 \end{pmatrix} = \begin{pmatrix} A_1 & 0 \\ 0 & A_3 \end{pmatrix}$

where $A_3 = A_2 + A_{12}' A_1^{-1} A_{12}$ is again an alternate $2(n-r)$-rowed non-singular matrix. Further, from the form of $C'AC$ and $C^{-1}RC$, it is clear that $C'SC = \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix}$ where $S_1$ and $S_2$ are positive symmetric matrices of $2r$ and $2(n-r)$ rows respectively. From $R^2 = -E$, we have $R_1^2 = -E_1$, $R_2^2 = -E_2$, and from $C'SC > 0$, we see that $A_1 R_1 = S_1$ and $A_3 R_2 = S_2$ are again positive symmetric. Thus $R_1$ and $R_2$ are again R-matrices.
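The reduction step can be traced on a toy example. The sketch below (our own construction, under the sign conventions derived above) starts from $R = [R_0, R_0]$, makes it reducible but not reduced by conjugating with $C_1 = \begin{pmatrix} E & Y \\ 0 & E \end{pmatrix}$, and checks that $X = -A_1^{-1}A_{12}$ restores the block-diagonal form:

```python
# Theorem 1's reduction on a 4-rowed example.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]

def block(M11, M12, M21, M22):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    top = [M11[i] + M12[i] for i in range(2)]
    bot = [M21[i] + M22[i] for i in range(2)]
    return top + bot

R0 = [[0, -1], [1, 0]]
A0 = [[0, 1], [-1, 0]]
E, Z = [[1, 0], [0, 1]], [[0, 0], [0, 0]]
Y = [[1, 2], [0, 1]]                       # arbitrary rational block
negY = [[-y for y in row] for row in Y]

C1 = block(E, Y, Z, E)                     # C1^{-1} = (E -Y; 0 E)
C1_inv = block(E, negY, Z, E)
R = matmul(matmul(C1_inv, block(R0, Z, Z, R0)), C1)   # reducible form (9)
A = matmul(matmul(transpose(C1), block(A0, Z, Z, A0)), C1)

# Split off A12 and compute X = -A1^{-1} A12  (here A1 = A0):
A12 = [row[2:] for row in A[:2]]
A0_inv = [[0, -1], [1, 0]]
X = [[-x for x in row] for row in matmul(A0_inv, A12)]

C = block(E, X, Z, E)
C_inv = block(E, [[-x for x in row] for row in X], Z, E)
R_red = matmul(matmul(C_inv, R), C)        # should be diag(R0, R0)
A_red = matmul(matmul(transpose(C), A), C)
```

Here the recipe recovers $X = -Y$ and both $C^{-1}RC$ and $C'AC$ come out block-diagonal, as in (10).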
Remarks. (1) Theorem 1 was proved by Poincaré only in the special case when the underlying abelian variety is the Jacobian of a Riemann surface of genus $n$ (see also p. 133, [26]).
(2) If $R$ is reducible and $MR = RM$, then, although $C^{-1}MC$ commutes with $C^{-1}RC$, it is not necessary that $C^{-1}MC$ should reduce to the form (10).

(3) In terms of period matrices, the transformation $R \to C^{-1}RC$ corresponds to the transformation $P \to PC$, where $P$ is a period matrix associated to $R$ (by Proposition 1). From (10), we can prove that $PC = \begin{pmatrix} P_1 & 0 \\ 0 & P_2 \end{pmatrix}$ where $P_1$ and $P_2$ are period matrices of $r$ and $n-r$ rows respectively. The field of abelian functions having $PC$ for a period matrix is the composite of the fields of abelian functions having $P_1$ and $P_2$ for period matrices respectively.
Applying the reduction above successively, we can split $R$ into irreducible R-matrices.

If $A, B, C, \ldots$ are finitely many square matrices, then $[A, B, C, \ldots]$ shall stand for the direct sum of $A, B, C, \ldots$. With this notation, we can find, by Theorem 1, a rational $2n$-rowed non-singular matrix $C$ such that

$C^{-1} R C = [R_1, R_2, \ldots],$   (11)

and correspondingly

$C' A C = [A_1, A_2, \ldots].$
If two of the matrices $R_i$ occurring on the right-hand side in (11) are equivalent, say $R_2 = C_1^{-1} R_1 C_1$ for a rational non-singular $C_1$, then, replacing $C$ by $C\,[E, C_1^{-1}, E, \ldots]$, we could suppose that already $R_1 = R_2$. In this process of changing $C$, $A_2$ gets replaced by $C_1'^{-1} A_2 C_1^{-1}$. Now, if $C_1'^{-1} A_2 C_1^{-1}$ is not equal to $A_1$, we could change the matrix $A$ we started from suitably so that this would be true. Thus, grouping the equivalent matrices $R_i$ in (11) together and choosing $C$ properly, we could suppose that

$C^{-1} R C = [\underbrace{R_1, \ldots, R_1}_{f_1}, \underbrace{R_2, \ldots, R_2}_{f_2}, \ldots]$   (12)
where $R_j$ is a R-matrix repeated $f_j$ times in the direct sum. Correspondingly, we may suppose that

$C' A C = [\underbrace{A_1, \ldots, A_1}_{f_1}, \underbrace{A_2, \ldots, A_2}_{f_2}, \ldots]$   (13)

where, again, $A_j$ is repeated $f_j$ times. Now, in (12), $R_j$ is not equivalent to $R_k$ over $Q$ for $j \ne k$. On the other hand, it could happen that $A_j = A_k$ for $j \ne k$ in (13).

We shall suppose, in the sequel, that $R$ and $A$ are already in the form given on the right-hand side of (12) and (13) respectively.

Let us consider the set of linear equations defined by the single matrix equation $RM = MR$. This is a system of $4n^2$ linear equations in $4n^2$ unknowns with real coefficients, namely the elements of $R$. In order to reduce this to a set of equations with rational coefficients, we shall adopt the following procedure.
Let $\rho_1, \rho_2, \ldots, \rho_p$ be a maximal set of elements $r_{kl}$ of $R$ which are linearly independent over $Q$. We may then write

$R = \rho_1 L_1 + \cdots + \rho_p L_p$   (14)

where $L_1, \ldots, L_p$ are rational $2n$-rowed square matrices. Denote by $T$ the abstract algebra generated by $L_1, \ldots, L_p$ over $Q$ and by $(T)$ the matrix representation by the $L_i$'s. In other words, $(T)$ is the algebra consisting of elements $T$ of the form

$T = \sum_{1 \le k_1, \ldots, k_m \le p} a_{k_1 \ldots k_m} L_{k_1} \cdots L_{k_m}$  $(a_{k_1 \ldots k_m} \in Q)$

and the $2n$-rowed identity $E$. By definition, $T$ is uniquely determined by $R$, since a change of $\rho_1, \ldots, \rho_p$ would merely involve taking, instead of $L_1, \ldots, L_p$, matrices $T_1, \ldots, T_p$ which are rational linear combinations of $L_1, \ldots, L_p$, and vice versa.
Incidentally, we remark that the determination of $L_1, \ldots, L_p$ in (14) is not as simple as it appears. For, take the simple 2-rowed R-matrix $\begin{pmatrix} 0 & -1/\gamma \\ \gamma & 0 \end{pmatrix}$ with $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ and $\gamma$ being Euler's constant. It is rather ironical that one does not know whether $\gamma$ and $1/\gamma$ are linearly independent over $Q$.
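A quick numerical check (our own sketch; the placement of the minus sign in $R$ is our reconstruction, chosen so that $S = AR$ comes out positive) that this matrix is indeed a R-matrix, together with its decomposition (14) into rational matrices:

```python
import math

# Euler's constant via H_n - log n - 1/(2n), accurate to about 1/n^2.
n = 10**5
g = sum(1.0 / k for k in range(1, n + 1)) - math.log(n) - 1 / (2 * n)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

R = [[0, -1 / g], [g, 0]]
A = [[0, 1], [-1, 0]]

R2 = matmul(R, R)                 # = -E
S = matmul(A, R)                  # = diag(g, 1/g): positive symmetric

# Decomposition (14): if gamma and 1/gamma are linearly independent
# over Q, then p = 2 and R = gamma*L1 + (1/gamma)*L2 with rational
L1 = [[0, 0], [1, 0]]
L2 = [[0, -1], [0, 0]]
recomposed = [[g * L1[i][j] + (1 / g) * L2[i][j] for j in range(2)]
              for i in range(2)]
```

Whether $p$ is really 2 here is exactly the open question of the remark: nobody knows whether $\gamma$ and $1/\gamma$ are linearly independent over $Q$.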
The relationship between $(M)$ and $(T)$ is given by

Proposition 3. The algebra $(M)$ is the commutator algebra of $(T)$.

(Definition. By the commutator algebra of $(T)$, we mean the set of all $2n$-rowed rational square matrices $M$ for which $TM = MT$ for all $T \in (T)$.)
Proof. For each $M \in (M)$, we have

$0 = RM - MR = \sum_{j=1}^{p} \rho_j (L_j M - M L_j).$

But now, $\rho_1, \ldots, \rho_p$ being linearly independent over $Q$ and since $L_j M - M L_j$, $j = 1, 2, \ldots, p$, are rational matrices, we deduce that $L_j M = M L_j$ for $1 \le j \le p$. Hence $TM = MT$ for all $T \in (T)$. The converse is trivial, since if $M$ is in the commutator algebra of $(T)$, then $M$ commutes with $L_j$ for $1 \le j \le p$ and hence with $R$, by (14). Thus $(M)$ is precisely the commutator algebra of $(T)$.
Proposition 4. The algebra $(T)$ admits the involution $T \to T^* = A^{-1}T'A$.

Proof. First of all, we see that for the basis elements $L_j$, $j = 1, 2, \ldots, p$, of $(T)$, we have $L_j^* = -L_j$. For, from $A' = -A$, $S = S'$, we have $R^* = -R$ and further $R^* = (\rho_1 L_1 + \cdots + \rho_p L_p)^* = \rho_1 L_1^* + \cdots + \rho_p L_p^* = -(\rho_1 L_1 + \cdots + \rho_p L_p)$. In other words, we have

$\rho_1 (L_1 + L_1^*) + \cdots + \rho_p (L_p + L_p^*) = 0.$

Again, since $L_j + L_j^*$, $1 \le j \le p$, are rational and $\rho_1, \ldots, \rho_p$ are linearly independent over $Q$, we have $L_j^* = -L_j$ $(1 \le j \le p)$. And now, for any $T = \sum_{1 \le k_1, \ldots, k_m \le p} a_{k_1 \ldots k_m} L_{k_1} \cdots L_{k_m}$, we see that

$T^* = \sum a_{k_1 \ldots k_m} L_{k_m}^* \cdots L_{k_1}^* = \sum_{1 \le k_1, \ldots, k_m \le p} (-1)^m a_{k_1 \ldots k_m} L_{k_m} \cdots L_{k_1} \in (T).$

That the mapping $T \to T^*$ is an involution of $(T)$ is quite clear.
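For the 2-rowed Euler-constant example above, with the sign conventions used there, the decomposition (14) can be taken with $L_1 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$ and $L_2 = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}$; the properties $L_j^* = -L_j$ and the anti-automorphism law can then be checked mechanically (our own verification sketch):

```python
# Check L_j* = -L_j under T* = A^{-1} T' A with A = (0 1; -1 0).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]

A = [[0, 1], [-1, 0]]
A_inv = [[0, -1], [1, 0]]

def star(T):
    return matmul(matmul(A_inv, transpose(T)), A)

def neg(M):
    return [[-x for x in row] for row in M]

L1 = [[0, 0], [1, 0]]
L2 = [[0, -1], [0, 0]]
```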
Let us remark that the involution $T \to T^*$ of $(T)$ is not necessarily a positive involution. The fact that $S = AR$ is symmetric is equivalent to the fact that $(T)$ is closed under an involution $T \to T^*$ such that $L_j^* = -L_j$, $1 \le j \le p$. Therefore, the condition that $S$ is positive symmetric is much stronger than $(T)$ admitting the special involution $T \to T^*$.
Since $R = [\underbrace{R_1, \ldots, R_1}_{f_1}, \underbrace{R_2, \ldots, R_2}_{f_2}, \ldots]$, it is clear that every $L_i$ is of the same form as $R$, in view of the linear independence of $\rho_1, \ldots, \rho_p$ over $Q$. Thus any $T \in (T)$ is of the form $[\underbrace{T_1, \ldots, T_1}_{f_1}, \underbrace{T_2, \ldots, T_2}_{f_2}, \ldots]$, with $T_j$ being repeated $f_j$ times in the direct sum. For fixed $j$, let us denote by $(T_j)$ the algebra generated over $Q$ by such rational matrices $T_j$, and the corresponding abstract algebra by $T_j$. None of the $T_j$ can be the null-algebra, for then we would have $R_j = 0$, contradicting $R^2 = -E$.
Remark. For $k \ne 1$, it could happen that $T_k$ and $T_1$ are isomorphic. But there cannot exist a nonsingular rational matrix $B$, independent of $T = [\underbrace{T_1, \ldots}_{f_1}, \underbrace{T_2, \ldots}_{f_2}, \ldots] \in (T)$, such that for $k \ne 1$, $T_k = B^{-1} T_1 B$ for every $T \in (T)$. For, if such a $B$ were to exist, then $R_k = B^{-1} R_1 B$ for $k \ne 1$, which is a contradiction.
Each algebra $(T_j)$ is necessarily irreducible (since, if $(T_j)$ were reducible, we would have $R_j$ necessarily reducible).

By a simple algebra, we mean an irreducible matrix algebra [28]. This definition of a simple algebra can be identified with another, of a simple (matrix) algebra as one having no proper two-sided ideals. A semi-simple algebra is, by definition, a direct sum of simple algebras.
Proposition 5. The algebra $(T)$ is semi-simple.

Proof. The algebras $(T_j)$ are simple, and if we could show that $T$ is the direct sum of the algebras $T_j$, then our proposition would be proved. For this, it is sufficient to prove that, if $T_l \in (T_l)$, then $T = [0, \ldots, 0, \underbrace{T_l, \ldots, T_l}_{f_l}, 0, \ldots, 0] \in (T)$ for every $l$ with $1 \le l \le p$. We might suppose, without loss of generality, that $l = 1$. Let now, for $1 \le j \le p$, $(N_j)$ be the set of $T_j \in (T_j)$ such that $T = [0, \ldots, 0, \underbrace{T_j, \ldots, T_j}_{f_j}, 0, \ldots, 0]$ is in $(T)$. It is easy to verify that $(N_j)$ is a two-sided ideal in $(T_j)$. Now, $(T_j)$ being simple, we have $(N_j) = (T_j)$ or $(N_j)$ is the null-algebra. If $(N_1) = (T_1)$, we are through. Otherwise, let $k$ be the smallest positive integer greater than 1 such that $(N_{k-1})$ is the null-algebra and $(N_k)$ is the whole of $(T_k)$. Then, necessarily, $2 \le k \le p$, for otherwise $(T_p)$ will be the null-algebra, which is not true, as we know. We now claim that if $T = [T_1, \ldots, T_{k-1}, T_k, 0, \ldots, 0] \in (T)$ (each $T_j$ occurring $f_j$ times), then, corresponding to $T_k \in (T_k)$, $T_{k-1}$ in $(T_{k-1})$ is uniquely determined. For, if $T = [T_1, \ldots, T_{k-1}, T_k, 0, \ldots, 0] \in (T)$ and $M = [M_1, \ldots, M_{k-1}, T_k, 0, \ldots, 0] \in (T)$, then $T_{k-1} - M_{k-1} \in (N_{k-1})$, which is the null-algebra, by definition of $k$. Thus there is a one-one correspondence $T_k \to T_{k-1}$ between $(T_k)$ and $(T_{k-1})$, which is actually an algebra isomorphism. But now, $(T_k)$ and $(T_{k-1})$ are irreducible, and therefore there exists a constant non-singular rational matrix $B$ such that if $T = [\ldots, T_{k-1}, T_k, \ldots] \in (T)$, then $T_{k-1} = B^{-1} T_k B$, which is a contradiction to our Remark on p. 11. Hence $(N_1) = (T_1)$, and similarly we can show $(N_j) = (T_j)$ for every $j$.
By a theorem on algebras, the commutator algebra of a semi-simple matrix algebra is again semi-simple. Thus $(M)$, which is the commutator-algebra of $(T)$, is semi-simple. (Compare the proof of Theorem 1.4-A, p. 717, [28].) We shall, however, find the structure of $(M)$ explicitly, as follows.
We may write $R = [\underbrace{R_1, \ldots}_{f_1}, \underbrace{R_2, \ldots}_{f_2}, \ldots]$ as

$R = [R^{(1)}, R^{(2)}, \ldots]$

and correspondingly $T = [\underbrace{T_1, \ldots}_{f_1}, \underbrace{T_2, \ldots}_{f_2}, \ldots] \in (T)$ as

$T = [T^{(1)}, T^{(2)}, \ldots].$

Again, we decompose $M \in (M)$ correspondingly as $(M_{kl})$. From $TM = MT$, it follows that

$T^{(k)} M_{kl} = M_{kl} T^{(l)}$   (15)

where $T^{(k)}$, $T^{(l)}$ run over all elements of $(T_{j_k})$ and $(T_{j_l})$ respectively. But now, the algebras $(T_j)$ are irreducible. Therefore, applying Schur's lemma, we see that either $M_{kl}$ is the zero matrix, or, if $M_{kl}$ is a square matrix (different from the zero matrix), then it is necessarily non-singular. Let us suppose now that $j_k \ne j_l$. If $M_{kl}$ is a square matrix different from zero, then it is necessarily a nonsingular (rational) matrix, and this contradicts the Remark on p. 11. Thus, corresponding to the decomposition $[\underbrace{R_1, \ldots}_{f_1}, \underbrace{R_2, \ldots}_{f_2}, \ldots]$ of $R$, the matrix $M \in (M)$ takes the form $[M_1, M_2, \ldots]$, where $M_k = (M^{(k)}_{pq})$ and, from (15),

$T_k M^{(k)}_{pq} = M^{(k)}_{pq} T_k$   (16)

for every $T_k \in (T_k)$. Thus $M^{(k)}_{pq}$ belongs to the commutator algebra of $(T_k)$, which we may denote by $(L_k)$. By Schur's lemma again, since $(T_k)$ is irreducible, we see from (16) that $M^{(k)}_{pq} = 0$ or it is non-singular. Thus the matrix algebra $(L_k)$ is indeed a division-algebra. Conversely, if $M = [M_1, \ldots, M_k, \ldots]$, where $M_k = (M^{(k)}_{pq})$ and $M^{(k)}_{pq} \in (L_k)$, then $M \in (M)$. Thus $(M)$ is the direct sum $(M_1) + (M_2) + \cdots$, where $(M_k)$ is the complete $f_k$-rowed matrix algebra over the division algebra $(L_k)$. Each $(M_k)$ is a simple algebra, since it is the complete matrix algebra over a division algebra. Thus $(M)$ is semi-simple.
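This block structure can be seen concretely (our own sketch, not from the text): for $R = [R_0, R_0]$ with $f_1 = 2$ and $R_0 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, the matrices commuting with $R$ are exactly the $2 \times 2$ arrays of blocks $aE + bR_0$, i.e. $2$-rowed matrices over a division algebra isomorphic to $Q(i)$:

```python
# Commutant of R = [R0, R0]: a 2x2 array of blocks a*E + b*R0.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

R0 = [[0, -1], [1, 0]]

def blk(a, b):
    """a*E + b*R0: the generic element of the division algebra (L_1)."""
    return [[a, -b], [b, a]]

def assemble(blocks):
    """Turn a 2x2 array of 2x2 blocks into a 4x4 matrix."""
    return [blocks[I][0][i] + blocks[I][1][i]
            for I in range(2) for i in range(2)]

Z = [[0, 0], [0, 0]]
R = assemble([[R0, Z], [Z, R0]])

M = assemble([[blk(1, 2), blk(3, -1)], [blk(0, 5), blk(-2, 4)]])
commutes = matmul(M, R) == matmul(R, M)

# A block NOT of the form a*E + b*R0 destroys the commutation:
bad = assemble([[[[1, 1], [0, 1]], blk(0, 0)], [blk(0, 0), blk(0, 0)]])
bad_commutes = matmul(bad, R) == matmul(R, bad)
```

This is the case $(M_1) = $ full $2$-rowed matrix algebra over $(L_1) \cong Q(i)$ of the decomposition just obtained.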
Let us remark that one could prove the fact that $(M)$ is semi-simple also directly, by making use of the positive involution in $(M)$.

In the direct sum decomposition $(M) = (M_1) + (M_2) + \cdots$ above, each $(M_k)$ is a complete matrix-algebra over a division-algebra $(L_k)$ and $(M_k)$ carries a positive involution which, when restricted to $(L_k)$, is again a positive involution. Thus, in our study of the commutator algebra of a R-matrix, we are finally reduced to the case of division algebras of finite rank over $Q$, carrying a positive involution.
3 Division algebras over Q with a positive involution
We now consider a division algebra $(M)$ with a positive involution, realised as the commutator algebra of a simple (matrix) algebra $(T)$. From the theory of algebras, it is known that the commutator algebra of $(M)$ is precisely $(T)$.

Regarding the subalgebra $(R) = (T) \cap (M)$, we have

Proposition 6. The algebra $(R)$ coincides with the centre of $(T)$ as also with the centre of $(M)$.

Proof. Let $K \in (R)$. Then, since $K \in (M)$, $K$ belongs to the centre of $(T)$. Conversely, if $L$ is in the centre of $(T)$, then $L \in (M)$, by our remark on commutator algebras above, and therefore $L \in (R)$. Again, let $K \in (R)$. Then $K$ is in the centre of $(M)$, since $K \in (T)$. Conversely, if $L$ belongs to the centre of $(M)$, then $L \in (T)$ clearly, and therefore $L \in (R)$.
Since $(M)$ is a division algebra, its centre $(R)$ is a field which is a representation of an algebraic number field $R$ of degree $h$, say, over $Q$. We denote the conjugates of $R$ by $R^{(1)} (= R), R^{(2)}, \ldots, R^{(h)}$. For $\alpha \in R$, we denote its conjugates by $\alpha^{(1)}, \ldots, \alpha^{(h)}$, and the trace $\alpha^{(1)} + \cdots + \alpha^{(h)}$ and norm $\alpha^{(1)} \cdots \alpha^{(h)}$ respectively by $tr_{R/Q}(\alpha)$, $N_{R/Q}(\alpha)$.
The involution $M \to M^*$ of $(M)$, when restricted to $(R)$, gives an automorphism $K \to K^*$ of $R$, of order at most 2. We now distinguish between the following two cases:

(i) for every element $\alpha$ of $R$, $\alpha^* = \alpha$;
(ii) there exists at least one $\alpha \in R$ such that $\alpha^* \ne \alpha$.

In the case of (i), we say that the involution is of the first kind, and in the case of (ii), we say it is of the second kind.

The positive involution in $(R)$ enables us to characterise the field $R$ further, as follows.
Theorem 2. In the case of positive involutions of the first kind, $R$ is totally real. In the case of positive involutions of the second kind, $R$ is a totally complex field which is an imaginary quadratic extension of a totally real field $L$.

Proof. First, we take the case of a positive involution of the first kind in $R$. To $\alpha \in R$, there corresponds $K \in (R)$. Now $\sigma(KK^*) = \sigma(K^2) > 0$ for every $K$ in $(R)$ different from 0. But $(R)$ is a multiple of the regular representation of $R$ over $Q$ (upto equivalence) and hence $\sigma(K^2) = m \cdot tr_{R/Q}(\alpha^2)$, where $m$ is a positive rational integer. Thus, for $\alpha \ne 0$ in $R$, we have

$tr_{R/Q}(\alpha^2) > 0.$   (17)

Suppose now $R$ is not totally real; in fact, let $R^{(1)}$ and $R^{(2)}$ be a pair of complex conjugates, without loss of generality. Let $\alpha_1, \alpha_2, \ldots, \alpha_h$ be a basis of $R$ over $Q$. Then we have, for $1 \le k \le h$, $\alpha^{(k)} = \sum_{j=1}^{h} x_j \alpha_j^{(k)}$ with $x_j \in Q$. Now, $tr_{R/Q}(\alpha^2) = F(x_1, \ldots, x_h)$ is a quadratic form in $x_1, \ldots, x_h$ with coefficients in $Q$, and it assumes positive (rational) values for rational $x_1, \ldots, x_h$ not all zero. Hence, in the first place, $F$ is nondegenerate, since, if $F$ were degenerate, there would exist a rational column $x_0 \ne 0$ with $F_1 x_0 = 0$, where $F_1$ is the matrix associated with $F(x_1, \ldots, x_h)$, and then $x_0' F_1 x_0 = F(x_1^{(0)}, \ldots, x_h^{(0)}) = 0$ for rational $x_1^{(0)}, \ldots, x_h^{(0)}$ not all zero. By
continuity, it can be seen that $F$ is, in fact, a positive-definite quadratic form in $x_1, \ldots, x_h$. Since the matrix $(\alpha_k^{(j)})$ $(1 \le k, j \le h)$ is non-singular and $R^{(1)}$, $R^{(2)}$ are complex fields, it is possible to find real numbers $x_1^{(0)}, \ldots, x_h^{(0)}$ such that

$\alpha_1^{(1)} x_1^{(0)} + \cdots + \alpha_h^{(1)} x_h^{(0)} = i$
$\alpha_1^{(2)} x_1^{(0)} + \cdots + \alpha_h^{(2)} x_h^{(0)} = -i$
$\alpha_1^{(j)} x_1^{(0)} + \cdots + \alpha_h^{(j)} x_h^{(0)} = 0$ for $2 < j \le h$.

Now $F(x_1^{(0)}, \ldots, x_h^{(0)}) = i^2 + (-i)^2 + 0 + \cdots + 0 = -2 < 0$. We can, by continuity of $F(x_1, \ldots, x_h)$ again, find rational numbers $x_1^*, \ldots, x_h^*$ sufficiently close to $x_1^{(0)}, \ldots, x_h^{(0)}$ such that $F(x_1^*, \ldots, x_h^*) < 0$, which is a contradiction. Thus $R$ is necessarily totally real.
We now take the case of involutions of the second kind. Let $L$ be the fixed field of the involution, viz. the set of all $\alpha \in R$ such that $\alpha^* = \alpha$. Since the involution is of the second kind, there exists $\alpha_0 \in R$ such that $\alpha_0^* \ne \alpha_0$; clearly $\alpha_0 \notin L$. We now claim that $\lambda = \alpha_0 - \alpha_0^*$ $(\ne 0!)$ generates $R$ over $L$. For, $\lambda^* = -\lambda$ and $\lambda^2 = (\lambda^2)^* \in L$, $\lambda^2 \ne 0$. An arbitrary $\alpha \in R$ can be written as $\frac{1}{2}(\alpha + \alpha^*) + \frac{1}{2}(\alpha - \alpha^*) = \beta + \gamma\lambda$ (say). Obviously $\beta, \gamma \in L$ and, further,

if $\alpha = \beta + \gamma\lambda$ with $\beta, \gamma \in L$, then $\alpha^* = \beta - \gamma\lambda$.   (18)

(It is trivial that any $\beta + \gamma\lambda$ with $\beta, \gamma \in L$ belongs to $R$.) Now, if $K \in (R)$ corresponds to $\alpha \in R$, then $\sigma(KK^*) > 0$ for $K \ne 0$ implies that $tr_{R/Q}((\beta + \gamma\lambda)(\beta - \gamma\lambda)) = tr_{R/Q}(\beta^2 - \gamma^2\lambda^2) > 0$ for all $\beta, \gamma \in L$ not both zero. (Recall that $\sigma(KK^*) = m \cdot tr_{R/Q}(\alpha\alpha^*)$ for a positive integer $m$.) But we know that $tr_{R/Q}(\alpha\alpha^*) = tr_{L/Q}(tr_{R/L}(\alpha\alpha^*)) = 2\,tr_{L/Q}(\beta^2) - 2\,tr_{L/Q}(\gamma^2\lambda^2)$. In particular, for $\beta \ne 0$ in $L$, $tr_{L/Q}(\beta^2) > 0$, which implies, by the foregoing arguments, that $L$ is totally real. We now claim that $\lambda^2$ is necessarily totally negative. For, if one particular conjugate, say $(\lambda^2)^{(j)} > 0$, then we can find an element $\gamma$ in $L$ such that $|\gamma^{(j)}|$ is large and $|\gamma^{(k)}| \le 1$ for $k \ne j$, so that (taking $\beta = 0$) $tr_{L/Q}(-\gamma^2\lambda^2) < 0$, which gives a contradiction. Thus $R = L(\lambda)$ with $\lambda^2$ totally negative, i.e. $R$ is an imaginary quadratic extension of the totally real field $L$.
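The trace criterion of Theorem 2 is visible already in quadratic fields. In $Q(\sqrt{d})$, for $\alpha = x + y\sqrt{d}$ one has $tr(\alpha^2) = 2x^2 + 2dy^2$ and, under the conjugation $\alpha \to x - y\sqrt{d}$, $tr(\alpha\alpha^*) = 2x^2 - 2dy^2$. The sketch below (our own worked examples, not from the text) contrasts $d = 2$ (totally real) with $d = -1$ (not totally real):

```python
# Trace forms of quadratic fields Q(sqrt d), for alpha = x + y*sqrt(d).

def tr_sq(d, x, y):
    """tr((x + y sqrt d)^2) = 2x^2 + 2 d y^2 (identity involution)."""
    return 2 * x * x + 2 * d * y * y

def tr_norm(d, x, y):
    """tr((x + y sqrt d)(x - y sqrt d)) = 2x^2 - 2 d y^2."""
    return 2 * x * x - 2 * d * y * y

# d = 2: tr(alpha^2) > 0 always, so the identity is a positive
#        involution of the first kind, and the field is totally real.
# d = -1: tr(alpha^2) takes negative values (e.g. alpha = i gives -2),
#        so the identity involution is NOT positive; but complex
#        conjugation gives tr(alpha*alpha^*) = 2(x^2 + y^2) > 0, a
#        positive involution of the second kind.
```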
Remark. In the case of positive involutions of the second kind, the involution is uniquely determined by (18), viz. if $\alpha = \beta + \gamma\lambda$ with $\beta, \gamma \in L$, then $\alpha^* = \beta - \gamma\lambda$, which is just the complex conjugate of $\alpha$.

If the involution is not positive, then it is not uniquely determined, in general. Take the biquadratic field generated by $\sqrt{2}$ and $\sqrt{3}$ over $Q$; we have two distinct involutions, $\sqrt{2} \to -\sqrt{2}$ and $\sqrt{3} \to -\sqrt{3}$.
Having known the structure of the centre $R$ of the algebra $M$, we wish to remark that the division algebra $M$ can be considered as an algebra even over $R$, or, in the notation of the theory of algebras, a central algebra. For, let $\omega_1, \ldots, \omega_h$ be a basis of $R$ over $Q$ and let us denote the identity in $M$ by $\lambda_1$. Then $\omega_1\lambda_1, \ldots, \omega_h\lambda_1$ are linearly independent over $Q$. If there exists $\lambda_2$ in $M$ linearly independent of these $h$ elements, then it is easy to see that $\omega_1\lambda_1, \ldots, \omega_h\lambda_1, \omega_1\lambda_2, \ldots, \omega_h\lambda_2$ are linearly independent over $Q$. In this way, we can find $\lambda_1, \lambda_2, \ldots, \lambda_m$ in $M$ such that $\omega_1\lambda_1, \ldots, \omega_h\lambda_1, \omega_1\lambda_2, \ldots, \omega_h\lambda_2, \ldots, \omega_h\lambda_m$ form a basis of $M$ over $Q$ and $\lambda_1, \ldots, \lambda_m$ form a basis of $M$ over $R$. Thus $M$ is a central algebra of rank $m$ over $R$. It is known from the theory of algebras that $m = s^2$, where $s$ is a rational integer.
In connection with the problem of determining the division algebras over $Q$ with a positive involution, occurring as the complete commutator algebra of a R-matrix, we shall find, as a first step, all division algebras over $Q$ carrying a positive involution. In view of our remark above, it clearly suffices to find all central division algebras with a positive involution over a given field of the type mentioned in Theorem 2. Then, given one positive involution therein, we shall obtain all positive involutions in the algebra. We shall also examine the possibility of expressing the given positive involution in the specific form $M \to A^{-1}M'A$ (in the regular representation) with a rational non-singular skew-symmetric matrix $A$, and then getting, from one such $A$, all other principal matrices for the same involution.

First, we proceed to determine all central division algebras $V$ with a positive involution, over a given number field.
Let $V$ be commutative. Then, by Theorem 2, $V$ is either a totally complex or a totally real number field $R$, of degree $h$, say, over $Q$. Let $R^{(1)}, \ldots, R^{(h)}$ be the conjugates of $R$ and $\omega_1, \ldots, \omega_h$ be a basis of $R$
over $Q$. We now consider the so-called regular representation of $R$ over $Q$ (relative to $\omega_1, \ldots, \omega_h$). For any $\alpha \in R$, we have

$\alpha\omega_k = \sum_{j=1}^{h} x_{kj}\omega_j$,  $1 \le k \le h$,  $x_{kj} \in Q$,

$\alpha^{(l)}\omega_k^{(l)} = \sum_{j=1}^{h} x_{kj}\omega_j^{(l)}$,  $1 \le k, l \le h$.   (19)

Denoting the matrices $[\alpha^{(1)}, \ldots, \alpha^{(h)}]$, $(\omega_k^{(l)})$ and $(x_{kj})$ by $(\alpha)$, $\Omega$ and $D$ respectively, we rewrite (19) in matrix form as

$\Omega(\alpha) = D\Omega.$   (19)′

The mapping $(\alpha) \to D$ gives a faithful and irreducible rational representation of $R$. If $\omega_1, \ldots, \omega_h$ are replaced by $\tilde\omega_1, \ldots, \tilde\omega_h$, where $(\tilde\omega_1, \ldots, \tilde\omega_h)' = C(\omega_1, \ldots, \omega_h)'$ with $C$ rational and nonsingular, then we have the equivalent representation $(\alpha) \to CDC^{-1}$. All irreducible representations of $R$ are equivalent to the regular representation of $R$, and an arbitrary nondegenerate representation of $R$ is just a multiple of the same.
The involution

in R is, in view of our remark on p. 16 given


by

=
_

_
, if the involution is of the rst kind
, if it is of the second kind.
Passing to the transpose conjugate in (19)

, we have
(

But (

) = D

. Thus setting F
1
=

, we have 25
D

= F
1
D

F.
Now, observing that the involution commutes with all the isomor-
phisms of R i.e.
(k)
j
= (
j
)
(k)
, we see that F
1
= (tr
R/Q
(
i

j
)) is a
rational matrix but being positive hermitian, is positive symmetric.
The rank of a central (matrix) division algebra V over its centre is s², where s is a rational integer. For s = 1, V itself is an algebraic number field R, and we have seen in detail the structure of R in order that it might carry a positive involution.

Now suppose s = 2. The algebra V is then a so-called quaternion algebra over the centre R. Any element ξ ∈ V is of the form ξ = x + yi + zj + tk, where i, j, k satisfy the multiplication table given below:

i² = a ∈ R,  j² = b ∈ R,  k² = −ab,  ij = −ji = k,
jk = −kj = −bi,  ki = −ik = −aj.

It can be verified that if 1, i, j, k are linearly independent over Q and satisfy the multiplication table above, then they generate an algebra V of rank 4, with centre R.
When is this algebra a division algebra with a positive involution? Now, in V, we have the mapping

ξ = x + yi + zj + tk → ξ̄ = x − yi − zj − tk

and it is easy to check that this is an involution of V. Under the regular representation, ξ → D, where D is given by ξ(1, i, j, k)′ = D(1, i, j, k)′, i.e.

D = ⎛  x     y     z     t  ⎞
    ⎜  ay    x    −at   −z  ⎟
    ⎜  bz    bt    x     y  ⎟
    ⎝ −abt  −bz    ay    x  ⎠

Now

ξ̄ → D̄ = ⎛  x    −y    −z    −t ⎞
         ⎜ −ay    x     at    z ⎟
         ⎜ −bz   −bt    x    −y ⎟
         ⎝  abt   bz   −ay    x ⎠

Defining F^{-1} = [1, −a, −b, ab], it can be seen that D̄ = F^{-1}D′F. The representation ξ → D is not a representation over Q, but we can get one by replacing each entry of D by its regular representation over Q.
In order that V be a division algebra, it is necessary and sufficient that the norm of ξ ∈ V over R be different from zero, for ξ ≠ 0. But the norm of ξ = x + yi + zj + tk over R is just x² − ay² − bz² + abt². Hence the necessary and sufficient condition for V to be a division algebra is that the quadratic form f(x, y, z, t) = x² − ay² − bz² + abt² should not represent 0 non-trivially over R.
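For a concrete check, the determinant of the 4-rowed matrix D above equals f(x, y, z, t)² (the squared reduced norm — a fact not stated explicitly here, but easily verified). A minimal sketch with illustrative values a = 2, b = 3; the function names are ours:

```python
# Build the regular representation D of xi = x + y i + z j + t k and
# verify det D = f(x,y,z,t)^2, with f = x^2 - a y^2 - b z^2 + a b t^2.

def quaternion_rep(x, y, z, t, a, b):
    # rows give xi*1, xi*i, xi*j, xi*k in the basis (1, i, j, k)
    return [
        [x,          y,      z,     t],
        [a * y,      x,     -a * t, -z],
        [b * z,      b * t,  x,     y],
        [-a * b * t, -b * z, a * y, x],
    ]

def det(m):
    # Laplace expansion along the first row (exact integer arithmetic)
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for c in range(n):
        minor = [row[:c] + row[c + 1:] for row in m[1:]]
        total += (-1) ** c * m[0][c] * det(minor)
    return total

def norm_form(x, y, z, t, a, b):
    return x * x - a * y * y - b * z * z + a * b * t * t

a, b = 2, 3
for (x, y, z, t) in [(1, 0, 0, 0), (0, 1, 0, 0), (2, -1, 3, 1), (5, 2, -4, 7)]:
    D = quaternion_rep(x, y, z, t, a, b)
    assert det(D) == norm_form(x, y, z, t, a, b) ** 2
```

In particular, |D| ≠ 0 for ξ ≠ 0 exactly when the form f does not represent zero nontrivially.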
We shall now find conditions under which the involution ξ → ξ̄ in V is a positive involution.

More generally, let us take a central division algebra M of rank m over its centre R and let ξ → ξ̄ be an involution in M which is the identity on R. We shall now give the regular representation of M over R. Let μ_1, …, μ_m be a basis of M over R and ω_1, …, ω_h be a basis of R over Q. Then, for any ξ = Σ_{j=1}^{m} x_jμ_j ∈ M, we have

ξ(μ_1, …, μ_m)′ = D(μ_1, …, μ_m)′   (20)

where D = (d_{pq}) is an m-rowed square matrix with elements in R. For getting a rational representation of M, we may proceed as follows. Under the regular representation of R with respect to the basis ω_1, …, ω_h, we know that d_{pq} → D_{pq} = Ω[d_{pq}^{(1)}, …, d_{pq}^{(h)}]Ω^{-1}, where Ω = (ω_g^{(l)}), 1 ≤ g, l ≤ h. Let us now take as a basis of M over Q the mh elements ν_1, …, ν_{mh} defined by ν_{k+(l−1)h} = ω_kμ_l for 1 ≤ k ≤ h, 1 ≤ l ≤ m. Then we have, for ξ ∈ M,

ξ(ν_1, …, ν_{mh})′ = D_0(ν_1, …, ν_{mh})′   (21)
where D_0 = (D_{pq}) (1 ≤ p, q ≤ m) is clearly rational. We can get the relationship between D_0 and D as follows. Suppose, instead of ν_1, …, ν_{mh}, we take as a basis of M the mh elements λ_1, …, λ_{mh}, where λ_{l+(k−1)m} = ω_kμ_l (1 ≤ k ≤ h, 1 ≤ l ≤ m), and suppose V is the mh-rowed permutation matrix taking the (l+(k−1)m)-th row to the (k+(l−1)h)-th row; then, with respect to the new basis, ξ → V^{-1}D_0V. It is now easy to verify that

V^{-1}D_0V = (Ω × E_m)[D^{(1)}, …, D^{(h)}](Ω × E_m)^{-1}   (22)

where D^{(l)} = (d_{pq}^{(l)}) for 1 ≤ l ≤ h and Ω × E_m denotes the mh-rowed square matrix (ω_j^{(l)}E_m), E_m being the m-rowed identity matrix.
Let ξ = Σ_{j=1}^{m} x_jμ_j ∈ M and let ξ → D, ξ̄ → D̄ under (20). Then σ(DD̄) = f(x_1, …, x_m) is a quadratic form in x_1, …, x_m with coefficients in R.
Proposition 7. The involution ξ → ξ̄ in M is positive if and only if R is totally real and f(x_1, …, x_m) is totally positive-definite (i.e. f(x_1, …, x_m) as well as its conjugates over Q are positive-definite quadratic forms).
Proof. By definition, the involution is positive if, for every ξ ≠ 0 in M, σ(D_0D̄_0) is positive for the image D_0 of ξ under (21). Now, by (22),

σ(D_0D̄_0) = Σ_{j=1}^{h} σ(D^{(j)}(D̄)^{(j)}) = Σ_{j=1}^{h} σ(D^{(j)}(D^{(j)})‾)   (defining (D^{(j)})‾ = (D̄)^{(j)}).

We have thus

σ(D_0D̄_0) = tr_{R/Q}(σ(DD̄)).

Thus we should have, in particular, tr_{R/Q}(α²) > 0 for α ≠ 0 in R. Therefore R should be totally real, using the arguments in the proof of Theorem 2.

By the foregoing, the involution is positive if and only if tr_{R/Q}(f(x_1, …, x_m)) > 0 for x_1, …, x_m in R, not all zero. Now, for u ≠ 0 in R, we have f(x_1u, …, x_mu) = u²f(x_1, …, x_m). If, for x_1^{(0)}, …, x_m^{(0)} not all zero, some conjugate of ρ = f(x_1^{(0)}, …, x_m^{(0)}) is negative, then, by choosing u ∈ R suitably, we can make tr_{R/Q}(u²ρ) < 0, which is a contradiction. Moreover, no conjugate of f(x_1, …, x_m) over Q can be degenerate, for then there would exist x*_1, …, x*_m, not all zero, in R such that f(x*_1, …, x*_m) = 0 and tr_{R/Q}(f(x*_1, …, x*_m)) = 0. Thus the conjugates of f(x_1, …, x_m) are all nondegenerate and represent only totally positive numbers in the respective conjugates of R, which implies that they are positive definite. Our proposition is thus completely proved.
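The scaling argument tr_{R/Q}(u²ρ) above can be seen concretely. A minimal sketch (our own illustration, with R = Q(√2) represented by pairs (p, q) = p + q√2): an element ρ with conjugates of mixed sign has positive trace, yet a suitable u makes tr(ρu²) negative.

```python
# In R = Q(sqrt 2): if rho has conjugates of mixed sign, then tr(rho u^2)
# cannot stay positive for every u != 0 in R.

def mul(x, y):            # (p, q) represents p + q*sqrt(2)
    return (x[0] * y[0] + 2 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def trace(x):             # tr_{R/Q}(p + q*sqrt(2)) = 2p
    return 2 * x[0]

rho = (1, -2)             # 1 - 2*sqrt(2): one conjugate negative, one positive
u = (1, 1)                # 1 + sqrt(2)
assert trace(rho) > 0                      # the naive trace is positive
assert trace(mul(rho, mul(u, u))) < 0      # but tr(rho * u^2) < 0
```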
Going back to the quaternion division algebra V over R, we deduce that the involution ξ → ξ̄ is positive if and only if R is totally real and, further, the quaternary form x² − ay² − bz² + abt² is totally positive-definite; in other words, −a, −b should both be totally positive numbers in R.

If −a, −b are not both totally positive, then the involution ξ → ξ̄ is not positive. We shall, in this case, look for other involutions in V which might be positive. We shall first find the relationship between any two involutions in V which have the same effect on R.
Theorem 3 (Albert, [1]). Let ξ → ξ̄ and ξ → ξ̂ be two involutions in a central division algebra M with centre R and let, for α ∈ R, ᾱ = α̂. Then there exists γ ≠ 0 in M such that, for ξ ∈ M,

ξ̂ = γ^{-1}ξ̄γ,   γ̄ = ±γ.
Proof. The mapping ξ → (ξ̄)̂, being the composite of two involutions, is an automorphism of M, and further it is the identity on R. By a theorem of T. Skolem [23], every automorphism of a central simple algebra which is the identity on its centre is an inner automorphism of the algebra. Therefore, there exists γ ≠ 0 in M such that

(ξ̄)̂ = γ^{-1}ξγ   (23)

i.e. ξ̂ = γ^{-1}ξ̄γ. Applying the involution ξ → ξ̂ once more to this relation, we find

ξ = (γ^{-1}γ̄)ξ(γ^{-1}γ̄)^{-1} for all ξ ∈ M.

In other words, γ^{-1}γ̄ commutes with all elements of M, and hence γ̄ = aγ for some a ∈ R. Further, γ = γ̄̄ = (aγ)‾ = āγ̄ = āaγ. Since M is a division algebra, āa = 1.

Now, suppose that a = −1; then γ̄ = −γ. If a ≠ −1, then, setting δ = (a + 1)γ, we have δ ≠ 0 and δ̄ = (ā + 1)γ̄ = (ā + 1)aγ = (āa + a)γ = (1 + a)γ = δ. Since δ differs from γ only by the central factor a + 1, we have ξ̂ = γ^{-1}ξ̄γ = δ^{-1}ξ̄δ. Thus, in either case, ξ̂ = γ^{-1}ξ̄γ with γ̄ = γ or γ̄ = −γ.
Let now ξ → ξ̄ be a positive involution of the first kind in V; then R is totally real. We know that the involution ξ → ξ̂ in V is also of the first kind. Thus, by Theorem 3, there exists γ ≠ 0 in V with γ̄ = ±γ such that, for ξ ∈ V,

ξ̂ = γ^{-1}ξ̄γ.   (24)

If γ̄ = γ, then γ ∈ R, and then ξ̂ = ξ̄, i.e. the involution ξ → ξ̂ coincides with the involution ξ → ξ̄.

Let us suppose now that γ̄ = −γ. We shall construct β ≠ 0 in V with β̄ = −β and β̂ = β, i.e. γ^{-1}β̄γ = β, which means that βγ + γβ = 0. (Note that βγ + γβ = βγ + (βγ)‾ automatically lies in R.) The condition γ̄ = −γ implies, on expressing γ as x + yi + zj + tk, that x = 0, i.e. γ = pi + qj + rk with p, q, r ∈ R. Thus, to find β = yi + zj + tk ∈ V with β̂ = β, we have only to find numbers y, z, t in R satisfying βγ + γβ = 0, i.e.

apy + bqz − abrt = 0.

But this last equation is a linear equation in three unknowns over the field R and therefore admits infinitely many nontrivial solutions. Thus there exists β_0 = y_0i + z_0j + t_0k ∈ V such that β̂_0 = β_0 and β_0 ≠ 0.
We now observe that the involutions ξ → ξ̄ and ξ → ξ̂ related by (24), with γ̄ = −γ, cannot both be positive. For,

tr_{R/Q}(σ(β_0β̂_0)) = −tr_{R/Q}(σ(β_0β̄_0)).

In the case when the involution ξ → ξ̄ is positive, we thus conclude that no involution ξ → ξ̂ with ξ̂ = γ^{-1}ξ̄γ can be positive unless γ̄ = γ, in which case both involutions coincide.
Let us suppose that the involution ξ → ξ̄ is not positive. Then, in the first place, f(x, y, z, t) cannot be totally positive definite; and if the involution ξ → ξ̂ (= γ^{-1}ξ̄γ with γ̄ = −γ) is to be positive, then no conjugate of f(x, y, z, t) over Q can be negative definite either, since, for η ≠ 0 in R, f(η, 0, 0, 0) = η² and tr_{R/Q}(η²) = tr_{R/Q}(ηη̂) must be positive. Now we claim that no conjugate of f(x, y, z, t) can be positive-definite either. For, tr_{R/Q}(σ(β_0β̂_0)u²) must be positive for all u ≠ 0 in R, i.e. tr_{R/Q}(−f(0, y_0, z_0, t_0)u²) must be positive for all u ≠ 0 in R. But now, if some conjugate of f(x, y, z, t) were positive-definite, we could choose u suitably so that tr_{R/Q}(−f(0, y_0, z_0, t_0)u²) < 0. We know already that no conjugate of f(x, y, z, t) can be negative definite. Thus f(x, y, z, t) and all its conjugates must be indefinite if the involution ξ → ξ̂ (= γ^{-1}ξ̄γ with γ̄ = −γ) is to be positive. We are thus led to

Proposition 8. If the quadratic form f(x, y, z, t) = x² − ay² − bz² + abt² is totally positive-definite, then the only positive involution in V is the involution ξ → ξ̄; otherwise, in order that there might exist positive involutions in V, f(x, y, z, t) should be totally indefinite.
In the case when the form x² − ay² − bz² + abt² is totally indefinite, we remark that, by means of a linear transformation in x, y, z, t with coefficients in R, it can be brought to the form x² − a_1y² − b_1z² + a_1b_1t², where a_1 and b_1 are totally positive. First, we note that the ternary form φ(y, z, t) = −ay² − bz² + abt² is necessarily totally indefinite (this is because the three numbers −a^{(j)}, −b^{(j)}, a^{(j)}b^{(j)}, 1 ≤ j ≤ h, cannot all be of the same sign, in view of the fact that a^{(j)} and b^{(j)} are not both negative). We can find a linear transformation over R which takes φ(y, z, t) into a form −a_1y² + ψ(z, t), where −a_1 is any totally negative number represented by φ(y, z, t) in R. Again noticing that the binary form ψ(z, t) is totally indefinite (its determinant being totally negative), we can eventually transform φ(y, z, t) into the form −a_1y² − b_1z² + a_1b_1t², where a_1 and b_1 are totally positive. Thus we could suppose that the totally indefinite form x² − ay² − bz² + abt² already has the property that a, b are totally positive.
For α ∈ R, α > 0 means that α is totally positive. Now, since a > 0, the element i = √a generates in V a real field R(i). Any ξ = x + yi + zj + tk can be written as ξ = α + βj with α = x + yi and β = z + ti. For λ = u + vi in R(i), we denote λ̃ = u − vi. Then we have λj = jλ̃, and we obtain a representation of V (as a vector-space over R(i)) given by

ξ = α + βj → D_1 = ( α  β ; bβ̃  α̃ );   (1, j)′ξ = D_1(1, j)′.   (25)

Now

D̃_1 = ( α̃  β̃ ; bβ  α ) = FD_1F^{-1}   (26)

where F = ( 0  1 ; b  0 ) corresponds to ξ = j under (25). Further,

D̄_1 = ( α̃  −β ; −bβ̃  α ) = JD_1′J^{-1}   (27)

where D̄_1 corresponds under (25) to ξ̄ = x − yi − zj − tk = α̃ − βj, and J = ( 0  1 ; −1  0 ). The regular representation D of ξ ∈ V may be seen to be equivalent to the representation ξ → ( D_1  0 ; 0  D̃_1 ), viz.

D = WΩ_2 ( D_1  0 ; 0  D̃_1 ) Ω_2^{-1}W^{-1}   (28)

where Ω_2 = ( E_2  E_2 ; iE_2  −iE_2 ), E_2 is the 2-rowed identity and W is a permutation matrix.
Let ξ → ξ̂ = γ_1^{-1}ξ̄γ_1, with γ̄_1 = −γ_1, be an involution in V, and let ξ̂ → D̂_1, ξ → D_1 and γ_1 → L_1 under (25). Now D̂_1 = L_1^{-1}D̄_1L_1, where L̄_1 = JL_1′J^{-1}, by (27). Thus, setting F_1 = J^{-1}L_1, we see that F_1 is symmetric, which is equivalent to saying that γ̄_1 = −γ_1. Further,

D̂_1 = F_1^{-1}D_1′F_1.   (29)
Now σ(DD̂) = 2σ(D_1D̂_1), in view of (28) and (26). Hence, for the involution ξ → ξ̂ to be positive, the quadratic form σ(D_1D̂_1) should be totally positive-definite over R. This again implies, by (29), that σ(D_1F_1^{-1}D_1′F_1) should be totally positive. A necessary and sufficient condition for this is given by
Lemma 1. Let X = (x_{kl}), 1 ≤ k ≤ g, 1 ≤ l ≤ h, be a real matrix and let P, Q be real square matrices of h and g rows respectively. Then the quadratic form σ(XPX′Q) in the x_{kl} is positive-definite if and only if P and Q are both positive-definite or both negative-definite.

Proof. There exist real non-singular matrices B and C such that BPB′ = [p_1, …, p_h] and C′QC = [q_1, …, q_g]. Replacing X by CXB, we can suppose that P and Q are already in diagonal form. Now

σ(XPX′Q) = Σ_{k=1}^{g} Σ_{l=1}^{h} p_l q_k x_{kl}²

is positive-definite if and only if the products p_l q_k are all positive. Thus either p_1, …, p_h, q_1, …, q_g are all positive or all negative. In other words, the necessary and sufficient condition is that P, Q should be both positive-definite or both negative-definite.
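Lemma 1's criterion is easy to test numerically once P and Q are diagonalized, since the form becomes Σ p_l q_k x_{kl}². A small sketch (illustrative values; names ours):

```python
# sigma(X P X' Q) with P, Q diagonal equals sum_{k,l} p_l q_k x_kl^2:
# positive-definite exactly when every product p_l * q_k is positive.

def form_value(X, P, Q):
    return sum(Q[k] * P[l] * X[k][l] ** 2
               for k in range(len(Q)) for l in range(len(P)))

P_pos, Q_pos = [1, 3], [2, 5]        # both positive-definite
P_neg, Q_neg = [-1, -3], [-2, -5]    # both negative-definite
P_mix = [1, -3]                      # indefinite

samples = [[[1, 0], [0, 0]], [[0, 1], [0, 0]], [[2, -1], [3, 4]]]
for X in samples:
    assert form_value(X, P_pos, Q_pos) > 0   # pos * pos -> positive-definite
    assert form_value(X, P_neg, Q_neg) > 0   # neg * neg -> positive-definite

# mixed signs give an indefinite form:
assert form_value([[1, 0], [0, 0]], P_mix, Q_pos) > 0
assert form_value([[0, 1], [0, 0]], P_mix, Q_pos) < 0
```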
The passage from α (= x + yi), β (= z + ti), α̃, bβ̃ to x, y, z, t is a nonsingular real linear transformation, and we can thus look upon the elements of D_1 as independent variables. Thus, taking D_1 for X in Lemma 1, the criterion for σ(D_1F_1^{-1}D_1′F_1) to be totally positive is that each conjugate of F_1 be either positive-definite or negative-definite. Now we can find δ ∈ R such that the conjugates of δ have prescribed signs, and if we choose δγ_1 instead of γ_1, then we get δF_1 instead of F_1. Thus, without changing the involution, we may require that (writing γ_1 = λ + μj)

F_1 = ( −bμ̃  λ ; λ  μ )
is totally positive-definite. Now λ̃ = −λ, and therefore λ = pi with p ∈ R; let μ = q + ri. The conditions for F_1 to be totally positive-definite are

−bμ̃ > 0,   μ > 0,   −bμμ̃ − ap² > 0.

But b > 0, and therefore these may be rewritten as

μ > 0,   −μ̃ > 0,   μ − μ̃ = 2ri > 0,   −ap² − b(q² − ar²) > 0.

It is easy to check that all these conditions can be compressed into

r > 0,   b(ar² − q²) > ap².   (30)

It is possible to find p, q, r in R satisfying (30) (for example, take r = 1, p = q = 0), and therefore the existence in V of positive involutions ξ → ξ̂ (= γ_1^{-1}ξ̄γ_1) with γ̄_1 = −γ_1 is assured.
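That such involutions exist can be checked directly: with γ = k (a pure quaternion, γ̄ = −γ, satisfying the conditions of (30)), the involution ξ → k^{-1}ξ̄k sends i → i, j → j, k → −k, and the associated quadratic form is x² + ay² + bz² + abt², which is positive-definite. A sketch with illustrative values a = 2, b = 3 over R = Q (names ours):

```python
a, b = 2, 3

def qmul(u, v):
    # quaternion product in the basis (1, i, j, k), with i^2 = a, j^2 = b,
    # k^2 = -ab, ij = -ji = k, jk = -kj = -bi, ki = -ik = -aj
    x1, y1, z1, t1 = u
    x2, y2, z2, t2 = v
    return (
        x1*x2 + a*y1*y2 + b*z1*z2 - a*b*t1*t2,
        x1*y2 + y1*x2 - b*z1*t2 + b*t1*z2,
        x1*z2 + z1*x2 + a*y1*t2 - a*t1*y2,
        x1*t2 + t1*x2 + y1*z2 - z1*y2,
    )

def hat(u):
    # the involution xi -> k^{-1} xibar k: fixes 1, i, j and negates k
    x, y, z, t = u
    return (x, y, z, -t)

# hat is an anti-automorphism and its trace form is positive-definite:
u, v = (1, 2, 3, 4), (2, -1, 0, 5)
assert hat(qmul(u, v)) == qmul(hat(v), hat(u))
for w in [(1, 2, 3, 4), (0, 1, -1, 2), (5, -3, 2, -1)]:
    x, y, z, t = w
    scalar = qmul(w, hat(w))[0]     # scalar part of w * what(w)
    assert scalar == x*x + a*y*y + b*z*z + a*b*t*t
    assert scalar > 0
```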
Let now ξ → δ^{-1}ξ̄δ, with δ̄ = −δ ∈ V, be another positive involution (of the first kind). Setting ρ = γ_1^{-1}δ, we have δ^{-1}ξ̄δ = ρ^{-1}(γ_1^{-1}ξ̄γ_1)ρ = ρ^{-1}ξ̂ρ. Now ρ̄ = δ̄(γ̄_1)^{-1} = (−δ)(−γ_1)^{-1} = δγ_1^{-1}, so that ρ̂ = γ_1^{-1}ρ̄γ_1 = γ_1^{-1}δ = ρ. Conversely, if ρ̂ = ρ, then δ = γ_1ρ satisfies δ̄ = −δ. Thus all such involutions ξ → δ^{-1}ξ̄δ are connected with ξ → ξ̂ by δ^{-1}ξ̄δ = ρ^{-1}ξ̂ρ for a ρ ∈ V satisfying ρ̂ = ρ.
Suppose ξ → γ_k^{-1}ξ̄γ_k, k = 1, 2, are two positive involutions of V, with γ̄_k = −γ_k. If γ_k → L_k under (25), then L_2 = L_1R_1, where R_1 corresponds to ρ = γ_1^{-1}γ_2. Since ρ̂ = ρ by the foregoing, we have R̂_1 = R_1. Then F_k = J^{-1}L_k (k = 1, 2) should be totally positive, and further F_2 = F_1R_1. Conversely, given R_1 with R̂_1 = R_1 such that F_2 = F_1R_1 is totally positive, the element γ_2 ∈ V corresponding to L_2 = L_1R_1 under (25) gives a positive involution ξ → γ_2^{-1}ξ̄γ_2 in V.
Lemma 2. If F is a real m-rowed positive-definite matrix and R is a real matrix such that FR is symmetric, then FR is positive-definite if and only if all the eigenvalues of R are positive.

Proof. Since F is positive-definite and FR = R′F, we can find real non-singular C such that F = C′C and FR = C′BC with B = [b_1, …, b_m]. Then C′BC = C′CR, i.e. B = CRC^{-1}. Thus the eigenvalues of B and R are the same. Our lemma easily follows.
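Lemma 2 can be illustrated for 2-rowed matrices. A numerical sketch with values of our choosing; FR is kept symmetric by construction, taking R = F^{-1}S with S symmetric:

```python
# With F positive-definite and FR symmetric, FR is positive-definite
# exactly when the eigenvalues of R are positive (2x2 illustration).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matinv2(F):
    d = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return [[F[1][1] / d, -F[0][1] / d], [-F[1][0] / d, F[0][0] / d]]

def eigenvalues_2x2(R):
    tr = R[0][0] + R[1][1]
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    disc = (tr * tr - 4 * det) ** 0.5
    return ((tr - disc) / 2, (tr + disc) / 2)

def is_pos_def_2x2(S):
    return S[0][0] > 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0

F = [[2.0, 1.0], [1.0, 2.0]]                       # positive-definite
for S in ([[3.0, 1.0], [1.0, 2.0]], [[-1.0, 0.0], [0.0, 6.0]]):
    R = matmul(matinv2(F), S)                      # then F R = S is symmetric
    if is_pos_def_2x2(S):
        assert all(ev > 0 for ev in eigenvalues_2x2(R))
    else:
        assert min(eigenvalues_2x2(R)) <= 0
```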
Choosing the rational representation ξ → D_0 given by (21), we may conclude as follows. If D_0 → D̂_0 is a positive involution of the first kind in V, then all other positive involutions can be obtained in the form D_0 → R_0^{-1}D̂_0R_0, where R̂_0 = R_0 in V and, further, all the eigenvalues of R_0 are positive. The quadratic form x² − ay² − bz² + abt² is either totally definite or totally indefinite over R.
From (29) and (28), it can be verified that

D̂ = F^{-1}D′F   (31)

where F = (W′)^{-1}(Ω_2′)^{-1} ( F_1  0 ; 0  F̃_1 ) Ω_2^{-1}W^{-1}. Now

F^{-1} = WΩ_2 ( F_1^{-1}  0 ; 0  F̃_1^{-1} ) Ω_2′W′
       = WΩ_2 ( L_1^{-1}  0 ; 0  L̃_1^{-1} ) (WΩ_2)^{-1} · WΩ_2 ( J  0 ; 0  J ) (WΩ_2)^{-1} · WΩ_2Ω_2′W′.

Further, since WΩ_2 ( L_1^{-1}  0 ; 0  L̃_1^{-1} ) Ω_2^{-1}W^{-1} corresponds to γ_1^{-1} under the regular representation of V, it is a matrix with elements in R; moreover, it is easy to verify that the matrices WΩ_2 ( J  0 ; 0  J ) (WΩ_2)^{-1} and WΩ_2Ω_2′W′ have again their elements in R. Thus F^{-1} has elements in R and moreover, being a transform of the totally positive matrix ( F_1^{-1}  0 ; 0  F̃_1^{-1} ), is itself totally positive over R. Going over to the rational representation ξ → D_0 again, we have, from (31), that

D̂_0 = F_0^{-1}D_0′F_0   (32)

where F_0 is a rational positive symmetric matrix. The relation (32) is analogous to what we obtained on p. 25 for the case of fields, in terms of the regular representation over Q.
An important theorem due to Albert (p. 161, [1]) says that any division algebra over Q admitting an involution of the first kind is either an algebraic number field R or a quaternion division algebra over R. We have discussed, precisely for these two cases, all the involutions of the first kind. We may now proceed to study division algebras carrying involutions of the second kind. Such algebras, again, have been studied by Albert [1].

We have, in this connection, to deal with an important class of algebras called cyclic algebras, first introduced by L.E. Dickson in 1906.

4 Cyclic algebras
Let Z be a cyclic extension of degree s (> 1) over an algebraic number field R of degree h over Q. Let σ, σ², …, σ^{s−1}, σ^s (= identity) be the distinct automorphisms of Z over R. For α ∈ Z, we denote by α^{(r)} the effect of σ^r on α; in particular, α^{(s)} = α = α^{(0)}.

Let M be the set of elements ξ = α_0 + α_1j + ⋯ + α_{s−1}j^{s−1}, where α_0, α_1, …, α_{s−1} are in Z and j satisfies

jα = α^{(1)}j   (33)

for α ∈ Z. By iteration, we get from (33)

j^kα^{(l)} = α^{(k+l)}j^k.

This relation may be seen to be valid for all rational integers k ≥ 0 and l, defining j^0 = 1. In particular,

j^sα = α^{(s)}j^s = αj^s.

We now stipulate that 1, j, j², …, j^{s−1} are linearly independent over Z and

j^s = b   (34)

for some b (≠ 0) ∈ R. Under conditions (33) and (34), it can be verified that M is an algebra of rank s² over its centre R. A central algebra M over R, constructed as above with an auxiliary cyclic extension Z of R, is called a cyclic algebra. The field Z is called a splitting field for M. The quaternion algebra is a special case of a cyclic algebra, with s = 2.

It is known that every cyclic algebra is a simple algebra. Conversely, by a theorem of Brauer-Hasse-Noether [7], every simple algebra over Q can be realised as a cyclic algebra over its centre.
For ξ = α_0 + α_1j + ⋯ + α_{s−1}j^{s−1} ∈ M, we have the representation ξ → D of M in Z given by (1, j, …, j^{s−1})′ξ = D(1, j, …, j^{s−1})′, where

D = ⎛ α_0              α_1             …  α_{s−1}        ⎞
    ⎜ bα_{s−1}^{(1)}   α_0^{(1)}       …  α_{s−2}^{(1)}  ⎟
    ⎜ …                                                  ⎟
    ⎝ bα_1^{(s−1)}     bα_2^{(s−1)}    …  α_0^{(s−1)}    ⎠   (35)

Let us observe that all the terms below the diagonal of D involve b. The regular representation of M over R is given by

ξ → (Ω × E_s)[D^{(1)}, …, D^{(s)}](Ω × E_s)^{-1}

where Ω = (ω_k^{(l)}), with ω_1, …, ω_s being a basis of Z over R and, for D = (d_{pq}), D^{(i)} = (d_{pq}^{(i)}).
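The matrix of (35) can be checked to be multiplicative, D(ξη) = D(ξ)D(η), in the simplest case s = 2. A sketch with the illustrative choice Z = Q(√2) over R = Q and b = 3; all names are ours:

```python
# Elements of Z = Q(sqrt 2) are pairs (u, v) = u + v*sqrt(2); sigma is the
# nontrivial automorphism.  An element of M is (alpha0, alpha1) = alpha0 + alpha1*j.

b = 3

def zmul(x, y):
    return (x[0] * y[0] + 2 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def zadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def scal(k, x):
    return (k * x[0], k * x[1])

def sigma(x):
    return (x[0], -x[1])

def mmul(p, q):
    # (a0 + a1 j)(c0 + c1 j) = (a0 c0 + b a1 sigma(c1)) + (a0 c1 + a1 sigma(c0)) j,
    # using j*alpha = sigma(alpha)*j and j^2 = b
    (a0, a1), (c0, c1) = p, q
    return (zadd(zmul(a0, c0), scal(b, zmul(a1, sigma(c1)))),
            zadd(zmul(a0, c1), zmul(a1, sigma(c0))))

def rep(p):
    # the 2-rowed matrix D of (35): rows (a0, a1) and (b*sigma(a1), sigma(a0))
    a0, a1 = p
    return [[a0, a1], [scal(b, sigma(a1)), sigma(a0)]]

def matmul(A, B):
    return [[zadd(zmul(A[i][0], B[0][j]), zmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

p = ((1, 2), (3, -1))
q = ((0, 1), (2, 2))
assert rep(mmul(p, q)) == matmul(rep(p), rep(q))          # D is multiplicative
assert mmul(((0, 0), (1, 0)), ((0, 0), (1, 0))) == ((b, 0), (0, 0))  # j*j = b
```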
The algebra M is a division algebra if and only if |D| ≠ 0 for every ξ ≠ 0 in M. For, we know M is a simple algebra containing 1, and the condition |D| ≠ 0 for ξ ≠ 0 would imply that M is free from divisors of zero and, therefore, that the ideal generated by any ξ ≠ 0 is the whole of M. Conversely, if M is a division algebra and ξ ≠ 0 in M, it is trivial to see that |D| ≠ 0.

Writing every α_k ∈ Z as Σ_{l=1}^{s} x_{kl}ω_l, we see that, corresponding to ξ = Σ_{k=0}^{s−1} α_kj^k, |D| is a homogeneous form f(…, x_{kl}, …) of degree s in the variables x_{kl}, with coefficients in R. The necessary and sufficient condition for M to be a division algebra may thus be reformulated as follows, viz. the form f(…, x_{kl}, …) should not represent 0 nontrivially over R.
In the case of the quaternion algebra V over R (s = 2), for ξ = α + βj, α = x + yi, β = z + ti, we have |D| = αα̃ − bββ̃ = f(x, y, z, t) = x² − ay² − bz² + abt². We know that V is a division algebra if and only if f(x, y, z, t) does not represent zero nontrivially in R. Clearly, for β = 0 and ξ ≠ 0, |D| = x² − ay² ≠ 0, since a is not a square in R. We may then suppose that, for given ξ = α + βj in V, β ≠ 0. The condition |D| = 0 is then equivalent to the fact that b is the norm of the element αβ^{-1} of R(i) over R. Thus the quaternion algebra V is a division algebra if and only if b is not the norm of any element of R(i). In the case s > 2, we shall find conditions analogous to this, which shall be sufficient for the cyclic algebra M to be a division algebra.
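Conversely, when b is a norm from the splitting field, zero divisors appear explicitly: (μ + j)(μ̃ − j) = μμ̃ − b = 0. A sketch with the illustrative field Q(√2), μ = 3 + √2, b = N(μ) = 7 (names ours):

```python
# When b = mu * sigma(mu) is a norm from Z = Q(sqrt 2), the algebra with
# j^2 = b contains zero divisors: (mu + j)(sigma(mu) - j) = 0.

def zmul(x, y):            # (u, v) represents u + v*sqrt(2)
    return (x[0] * y[0] + 2 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def sigma(x):
    return (x[0], -x[1])

mu = (3, 1)                # 3 + sqrt(2)
b = zmul(mu, sigma(mu))[0] # b = N(mu) = 9 - 2 = 7
assert zmul(mu, sigma(mu)) == (b, 0)

def mmul(p, q):            # multiplication in M = Z + Z*j, with j^2 = b
    (a0, a1), (c0, c1) = p, q
    scal = lambda k, x: (k * x[0], k * x[1])
    add = lambda x, y: (x[0] + y[0], x[1] + y[1])
    return (add(zmul(a0, c0), scal(b, zmul(a1, sigma(c1)))),
            add(zmul(a0, c1), zmul(a1, sigma(c0))))

zero = mmul((mu, (1, 0)), (sigma(mu), (-1, 0)))
assert zero == ((0, 0), (0, 0))    # a product of two nonzero elements is 0
```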
Theorem 4 (Wedderburn, [24]). Let M be a cyclic algebra constructed as above with 1, j, …, j^{s−1} as basis over the splitting field Z, and let j^s = b belong to the centre R. If, for every integer r satisfying 0 < r ≤ s−1, b^r is not the norm of an element of Z over R, then M is a division algebra.
Proof. Let ξ = α_0 + α_1j + ⋯ + α_kj^k, where 0 ≤ k ≤ s−1, be an arbitrary element of M. If k = 0 and ξ ≠ 0, then, trivially, ξ has an inverse. Let then k > 0, and let us suppose α_k ≠ 0. We may, in fact, assume that α_k = 1, without loss of generality.

We shall first find β_0, β_1, …, β_{s−k} ∈ Z such that (β_0 + β_1j + ⋯ + β_{s−k}j^{s−k})(α_0 + α_1j + ⋯ + j^k) is of the form γ_0 + γ_1j + ⋯ + γ_{k−1}j^{k−1} and is different from 0. By iteration of this process, we can eventually obtain an inverse for ξ, under the hypotheses of the theorem. Now,

(β_0 + β_1j + ⋯ + β_{s−k}j^{s−k})(α_0 + α_1j + ⋯ + j^k) = γ_0 + γ_1j + ⋯ + γ_{s−1}j^{s−1}

where

γ_0 = β_0α_0 + β_{s−k}b,
γ_1 = β_0α_1 + β_1α_0^{(1)},
 ⋯
γ_{k−1} = β_0α_{k−1} + β_1α_{k−2}^{(1)} + β_2α_{k−3}^{(2)} + ⋯ + β_{k−1}α_0^{(k−1)},
γ_k = β_0 + β_1α_{k−1}^{(1)} + ⋯ + β_kα_0^{(k)},
γ_{k+1} = β_1 + β_2α_{k−1}^{(2)} + ⋯,
 ⋯
γ_{s−1} = β_{s−k−1} + β_{s−k}α_{k−1}^{(s−k)}.

Taking β_{s−k} = 1, we can find β_{s−k−1}, β_{s−k−2}, …, β_0 inductively such that γ_{s−1} = 0, γ_{s−2} = 0, …, γ_k = 0.
If, now, it turns out that γ_0 = γ_1 = ⋯ = γ_{k−1} = 0, we shall see that we arrive at a contradiction to the hypotheses.

Replacing j by j_0 where, analogously to (33) and (34), j_0 satisfies

j_0α = α^{(1)}j_0 for α ∈ Z,
1, j_0, …, j_0^{s−1} are linearly independent over Z, and
j_0^s = x (an indeterminate),

we can verify easily that

(β_0 + β_1j_0 + ⋯ + j_0^{s−k})(α_0 + α_1j_0 + ⋯ + j_0^k) = β_0α_0 + j_0^s = x − b.   (36)

Now, for any ξ = ρ_0 + ρ_1j_0 + ⋯ + ρ_{s−1}j_0^{s−1} with ρ_0, ρ_1, …, ρ_{s−1} in Z, we have

(1, j_0, …, j_0^{s−1})′ξ = M(1, j_0, …, j_0^{s−1})′   (37)

Let M_1, M_2 correspond respectively to β_0 + β_1j_0 + ⋯ + j_0^{s−k} and α_0 + α_1j_0 + ⋯ + j_0^k under (37). Here M_2 has the form ( A  B ; C  D ), where C is a k-rowed square matrix with x on the diagonal and the factor x occurring only to the first power below it, with zeros above it, B is an (s−k)-rowed square matrix with 1 on the diagonal and zeros above it, and the matrices A, B, D are free from x. Further, noting that

(1, j_0, …, j_0^{s−1})′(x − b) = (x − b)E_s(1, j_0, …, j_0^{s−1})′,

where E_s is the s-rowed identity matrix, we have, from (36) and (37), by taking determinants,

|M_1||M_2| = |(x − b)E_s| = (x − b)^s.

But, by using Laplace's expansion of the determinant of M_2 along k-rowed minors of the first k columns of M_2, we observe that

|M_2| = (−1)^{k(s−k)}x^k + ⋯ + N(α_0)

where N(α_0) is the norm of α_0 over R. Since |M_2| divides the polynomial (x − b)^s, it follows that it is necessarily equal to (−1)^{k(s−k)}(x − b)^k. Comparing the constant terms, we have

N(α_0) = (−1)^{k(s−k)}(−b)^k, i.e. b^k = N(±α_0),

which is a contradiction to the hypothesis that no b^r (0 < r < s) is the norm of an element of Z. Our theorem is therefore proved.
Remark 1. In the hypotheses of Theorem 4, it is sufficient to require that, for divisors t of s satisfying 1 ≤ t < s, b^t shall not be the norm of any element of Z over R. For, let 0 < r < s and t = g.c.d. of r and s. Further, let b^r = N(β) for β ∈ Z. Now there exist rational integers p, q such that pr + qs = t. Then b^t = N(β^p b^q). In particular, if s is a prime, then all these (s−1) conditions reduce to the single condition that b should not be the norm of an element in Z. In this case, M is a division algebra. Conversely, as we shall see presently, if M is a cyclic division algebra with s a prime, then necessarily b cannot be the norm of any element of Z over R. It has been shown by Hasse [9] that the conditions on b in Theorem 4 are also necessary for M to be a division algebra. The proof by Hasse involves the use of factor systems in the theory of algebras. We give, in simple cases, a proof of the necessity of Wedderburn's conditions.
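The reduction step in Remark 1 is pure exponent arithmetic and can be checked mechanically. A sketch (the value b = 5 and the pair s = 12, r = 8 are our illustrative choices):

```python
# With t = gcd(r, s) and pr + qs = t (Bezout), the element beta^p * b^q
# has norm N(beta)^p * b^{qs} = b^{rp + qs} = b^t, since N(b) = b^s for b in R.

from fractions import Fraction
from math import gcd

def ext_gcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, u, v = ext_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

s, r = 12, 8
t, p, q = ext_gcd(r, s)
assert t == gcd(r, s) and p * r + q * s == t

b = Fraction(5)
N_beta = b ** r                       # hypothesis: b^r = N(beta)
N_of_beta_p_b_q = N_beta ** p * b ** (q * s)
assert N_of_beta_p_b_q == b ** t      # so b^t is also a norm
```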
Proposition 9. With the notation of Theorem 4, let, for a divisor r of s, b^r = N(β) for β ∈ Z, and let r and s/r be coprime. Then M cannot be a division algebra.

Proof. It is sufficient to show that M contains divisors of zero, under the given conditions.

Let s_1 = s/r and let Z_{s_1} be the fixed field of the group of automorphisms 1, σ^{s_1}, …, σ^{(r−1)s_1} of Z over R. Then the set M_{s_1} of elements of the form ξ = α_0 + α_1j^r + α_2j^{2r} + ⋯ + α_{s_1−1}j^{(s_1−1)r}, with α_i ∈ Z_{s_1}, is again an algebra. Now M_{s_1} ⊂ M, and we shall show that M_{s_1} contains divisors of zero.
The element γ = ββ^{(s_1)}β^{(2s_1)} ⋯ β^{((r−1)s_1)} lies in Z_{s_1}, and further b^r is the norm of γ in Z_{s_1} over R. Moreover, if j_0 = j^{r²}γ^{-1}, then 1, j_0, …, j_0^{s_1−1} is a basis of M_{s_1} over Z_{s_1}, and j_0 satisfies a minimum polynomial equation of degree s_1. Now

j_0^{s_1} = b^r(N_{Z_{s_1}/R}(γ))^{-1} = 1.

This gives us a factorization of 0 in M_{s_1}, viz.

0 = (j_0 − 1)(j_0^{s_1−1} + j_0^{s_1−2} + ⋯ + 1),

and neither of the factors can be zero, since the minimum polynomial of j_0 is of degree s_1.

Corollary. If s is a product of distinct primes, then the conditions of Wedderburn in Theorem 4 are also necessary for M to be a division algebra.
5 Division algebras over Q with involutions of the second kind

Let V be a division algebra over Q. Then, by the theorem of Brauer-Hasse-Noether, it is known that V is a cyclic algebra over its centre R, with a certain cyclic extension of R as splitting field.

Conditions necessary and sufficient for V to have an involution of the second kind have been given by Albert [1]. First, let ξ → ξ̄ be such an involution in V. If L is the fixed field of the involution contained in the centre R of V, then R = L(c) is a quadratic extension of L, with a suitable c in R satisfying c̄ = −c. In this case, Albert has shown (Chap. X, [1]) that one can find a cyclic extension Z_0 = L(θ) of degree s over L such that

i) the involution is the identity on Z_0, and

ii) the algebra V is a cyclic algebra having for its splitting field the field Z = Z_0(c) = L(c, θ), which is abelian of degree 2s over L.
Following our earlier notation, let 1, j, j², …, j^{s−1} generate V over Z and let j^s = b ∈ R. Now we claim that j̄j commutes with all elements of Z. For, first of all, any α ∈ Z is of the form α = α_0 + α_1c with α_0, α_1 ∈ Z_0, and ᾱ = α_0 − α_1c. Hence the mapping α → ᾱ is an automorphism of Z. Denote by σ the generating automorphism of Z over R and by α^{(l)} the effect of σ^l on α ∈ Z. Using the fact that Z is abelian over L, we have, for α = α_0 + α_1c (with α_0, α_1 ∈ Z_0),

(ᾱ)^{(l)} = (α_0 − α_1c)^{(l)} = α_0^{(l)} − α_1^{(l)}c = (α^{(l)})‾.   (38)
Now, for α ∈ Z, we have

ᾱj̄ = (jα)‾ = (α^{(1)}j)‾ = j̄(ᾱ)^{(1)}   (39)

and therefore, for α ∈ Z, we obtain

α(j̄j) = j̄α^{(1)}j = (j̄j)α,

using (39) with ᾱ in place of α. Now Z is a maximal commutative system in V, and it follows immediately that

j̄j = a ∈ Z.   (40)

Moreover, (j̄j)‾ = j̄j, and therefore a ∈ Z_0, from (38). Now j^s = b and j̄^s = b̄, and

b̄b = j̄^sj^s = aa^{(1)} ⋯ a^{(s−1)}.

Thus we arrive at the important condition

N_{R/L}(b) = N_{Z_0/L}(a).   (41)

(See Theorem 18, p. 160, [1].)
Conversely, if V is a cyclic algebra generated by 1, j, j², …, j^{s−1} over its splitting field Z, if Z is realisable as a field L(c, θ) as above and if, further, j^s = b ∈ R satisfies (41) for a suitable a ∈ L(θ), then we can define an involution of the second kind in V as follows. For α = α_0 + α_1c in Z, with α_0, α_1 ∈ Z_0, we have only to define

ᾱ = α_0 − α_1c,   j̄ = aj^{-1}.   (42)

Extending (42) to all elements of V in the obvious way, we have an involution of the second kind.

We shall now show that Wedderburn's conditions, sufficient for a cyclic algebra V to be a division algebra, are not incompatible with condition (41), which is necessary and sufficient for a cyclic (division) algebra to carry an involution of the second kind.
Let us take L = Q, c = √−1, R = Q(√−1), p an odd prime and ζ a primitive p-th root of unity. The field Z_0 = Q(θ) with θ = ζ + ζ^{-1} is cyclic of degree s = (p−1)/2 over L. If now q is a prime, q ≡ 1 (mod 4), then q = λλ̄ for λ ∈ R, since (−1/q) = 1. Let us further suppose that q is a primitive root modulo p (there exist infinitely many such q). If now Z = R(θ), it is clear that the integral ideal (λ) generated by λ in Z is prime; similarly, (λ̄) is prime in Z and (λ) ≠ (λ̄). Now let us define b = λλ̄^{s−1}. Then N_{R/L}(b) = q^s = N_{Z_0/L}(q), so that (41) is satisfied with a = q. Moreover, we claim that, for 0 < r < s, b^r ≠ ββ^{(1)} ⋯ β^{(s−1)} for β ∈ Z. For, otherwise, let b^r = ββ^{(1)} ⋯ β^{(s−1)} for β ≠ 0, and let β = λ^tμ, where, in the prime factor decomposition of (μ), (λ) does not occur. Now (β^{(l)}) = λ^t(μ^{(l)}), and therefore

λ^rλ̄^{r(s−1)} = λ^{st}μμ^{(1)} ⋯ μ^{(s−1)}.

As a consequence, comparing the powers of the prime ideal (λ) on both sides, r = st, which is a contradiction, since 0 < r < s and t is a rational integer.

Thus the cyclic algebra generated by 1, j, …, j^{s−1} over Z as splitting field (where j^s = b) is, in fact, a division algebra with an involution of the second kind.
Example. p = 7, s = (p−1)/2 = 3, q = 17 = (4+i)(4−i), λ = 4+i, b = (4+i)(4−i)², a = 17, j³ = b, Z = Q(cos 2π/7, i).
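The data of the example are quickly verified. A sketch using Python's built-in complex numbers for Q(i):

```python
# p = 7, q = 17: check q ≡ 1 (mod 4), q = (4+i)(4-i), q a primitive root
# mod 7, and N(b) = b * conj(b) = 17^3 for b = (4+i)(4-i)^2.

p, q = 7, 17
assert q % 4 == 1

lam = complex(4, 1)                          # lambda = 4 + i
assert lam * lam.conjugate() == q            # 17 = (4+i)(4-i)

# 17 is a primitive root mod 7: its order mod 7 is p - 1 = 6
order = next(n for n in range(1, p) if pow(q, n, p) == 1)
assert order == p - 1

s = (p - 1) // 2                             # s = 3
b = lam * (lam.conjugate() * lam.conjugate())  # b = (4+i)(4-i)^2 = 68 - 17i
assert b * b.conjugate() == q ** s           # N_{R/L}(b) = 17^3 = N_{Z0/L}(17)
```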
Let ξ → D be the representation of the division algebra over its splitting field, where D is given by (35). Under this representation, we have

j → T = ( 0  E_{s−1} ; b  0 ),   E_{s−1} being the (s−1)-rowed identity matrix,

and, for α ∈ Z,

α → [α, α^{(1)}, …, α^{(s−1)}].

Let now ξ → ξ̄ be an involution of the second kind in V and let D* correspond to ξ̄ under the representation above. The restriction of the involution in V to Z is an automorphism α → ᾱ of Z. Let us denote, for any matrix M = (m_{kl}) with m_{kl} ∈ Z, the matrix (m̄_{kl}) by M̄. Then the connection between D and D* is given by

Proposition 10. There exists an s-rowed nonsingular symmetric matrix F with elements in Z_0 such that, for any ξ ∈ V, we have

D* = F^{-1}D̄′F.   (43)
Proof. Since V has an involution of the second kind, we have, by (41), an element a ∈ Z_0 such that

bb̄ = aa^{(1)} ⋯ a^{(s−1)}.   (44)

Now, to j̄ = aj^{-1} there corresponds the matrix

[a, a^{(1)}, …, a^{(s−1)}] ( 0  b^{-1} ; E_{s−1}  0 ).

We shall find elements x_0, x_1, …, x_{s−1} in Z_0, different from zero, such that

[x_0, x_1, …, x_{s−1}] ( 0  b̄ ; E_{s−1}  0 ) [x_0, x_1, …, x_{s−1}]^{-1} = [a, a^{(1)}, …, a^{(s−1)}] ( 0  b^{-1} ; E_{s−1}  0 ).

This matrix equation is equivalent to the conditions

x_1/x_0 = a^{(1)}, …, x_{s−1}/x_{s−2} = a^{(s−1)},   x_0b̄/x_{s−1} = a/b.   (45)

If we set x_i = aa^{(1)} ⋯ a^{(i)} for 1 ≤ i ≤ s−1 and x_0 = a, then these satisfy (45), and the last condition in (45) is nothing but (44). Thus, if we set

F = [a^{-1}, (aa^{(1)})^{-1}, …, (aa^{(1)} ⋯ a^{(s−1)})^{-1}],

then the matrix corresponding to j̄ is F^{-1}T̄′F, and, by iteration, the matrix corresponding to (j^r)‾ is

F^{-1}(T̄^r)′F.   (46)

For α ∈ Z, α → [α, α^{(1)}, …, α^{(s−1)}], and it is trivial to verify that

[ᾱ, (ᾱ)^{(1)}, …, (ᾱ)^{(s−1)}] = [α, α^{(1)}, …, α^{(s−1)}]‾ = F^{-1}([α, α^{(1)}, …, α^{(s−1)}]‾)′F.   (47)

From (46) and (47) follows (43) for any ξ ∈ V. Let us note that F itself does not, in general, correspond to an element of V under the representation ξ → D.
The relationship (43) between D* and D will be useful in examining the positivity of the involution ξ → ξ̄. Our next object will be to find all involutions of the second kind in V and to investigate the existence of a positive involution. Results in this direction are again due to Albert [1].

If ξ → ξ̂ is any other involution in V having the same effect on R as the involution ξ → ξ̄, then we know that, for a γ ≠ 0 in V with γ̄ = ±γ, ξ̂ = γ^{-1}ξ̄γ. Since the involutions are of the second kind, we can suppose, without loss of generality, that γ̄ = +γ, by taking, if necessary, cγ instead of γ. Now, if γ → L under the representation ξ → D, then ξ̂ corresponds to L^{-1}F^{-1}D̄′FL. Setting G = FL, we have, from F^{-1}L̄′F = L (which expresses γ̄ = γ), that Ḡ′ = L̄′F = FL = G. Thus we have

D̂ = G^{-1}D̄′G,   G = FL = Ḡ′,   (48)

D̂ denoting the matrix corresponding to ξ̂.
6 Positive involutions of the second kind in division algebras
Let ξ → ξ̄ be an involution of the second kind in a division algebra V over Q, with centre R ≠ Q. Then we know from § 5 that V has a splitting field Z which can be realised as an abelian extension L(c, θ), where L is the fixed field of the involution in R, Z_0 = L(θ) is cyclic of degree s over L, and c = √−d, c̄ = −c ∈ R, for an element d ∈ L.

For the involution ξ → ξ̄ to be positive, we should necessarily have that L is totally real and d > 0; thus R should be a totally complex quadratic extension of the totally real field L. For α ∈ Z, ᾱ is just the complex conjugate of α. Further, Z_0 is totally real and the involution is the identity on Z_0.
From the representation $\alpha \to D$ of $V$ given by (35), we first get a representation $\alpha \to D_0$ of $V$ over $R$ by taking $D_0 = (\Theta \otimes E_s)[D, D^{(1)}, \ldots, D^{(s-1)}](\Theta \otimes E_s)^{-1}$, where, if $D = (d_{pq})$, then $D^{(k)} = \bigl(d_{pq}^{(k)}\bigr)$ $(1 \le k \le s-1)$, $E_s$ is the $s$-rowed identity and $\Theta = \bigl(\theta_k^{(l)}\bigr)$ $(1 \le k, l \le s)$, $\theta_1, \ldots, \theta_s$ being a basis of $Z_0$ over $L$ and serving also as a basis of $Z$ over $R$. Now let $\omega_1, \ldots, \omega_h$ be a basis of $R$ over $Q$ and let $\Omega = \bigl(\omega_p^{(q)}\bigr)$ $(1 \le p, q \le h)$. If $D_0 = (\delta_{kl})$, denote by $D_{0i}$ the corresponding matrix $\bigl(\delta_{kl}^{(i)}\bigr)$ for $1 \le i \le h$. Then, setting $\hat D = (\Omega \otimes E_{s^2})[D_{01}, \ldots, D_{0h}](\Omega \otimes E_{s^2})^{-1}$, where $E_{s^2}$ is the $s^2$-rowed identity matrix, we see that the mapping $\alpha \to \hat D$ is a representation of $V$ over $Q$ by $hs^2$-rowed matrices. Throughout this section, we shall denote by $(V)$ the image of $V$ under the representation $\alpha \to D$ over $Z$.
Let us define, analogously, $F_0$ by $F_0^{-1} = (\Theta \otimes E_s)[F, F^{(1)}, \ldots, F^{(s-1)}](\Theta \otimes E_s)'$, and denote by $F_{0i}$ $(1 \le i \le h)$ the matrix $\bigl(f_{kl}^{(i)}\bigr)$ corresponding to $F_0 = (f_{kl})$. Introducing $\hat F$ by the definition $\hat F^{-1} = (\Omega \otimes E_{s^2})[F_{01}^{-1}, \ldots, F_{0h}^{-1}](\Omega \otimes E_{s^2})'$, we see that $\hat F$ is an $hs^2$-rowed rational symmetric matrix, and the relation (43) in terms of $\hat D$ and $\hat F$ goes over into
$$\bar{\hat D} = \hat F^{-1}\hat D'\hat F, \qquad \hat F = \hat F'. \tag{49}$$
Defining $\hat G = \hat F\hat L$, we see that (48) goes over into the involution
$$\hat D \to \hat G^{-1}\hat D'\hat G, \quad \text{with } \hat G = \hat F\hat L = \hat G'. \tag{50}$$
For the involution $\alpha \to \hat\alpha$ to be positive, we must require that, for $\alpha \ne 0$, $\sigma(\hat D\hat D^{\wedge}) = \sigma(\hat D\,\hat G^{-1}\hat D'\hat G) > 0$, where $\hat D^{\wedge}$ denotes the image of $\hat D$ under the involution. Now $\sigma(\hat D\hat D^{\wedge}) = \sum_{i=1}^{h}\sigma(D_{0i}D_{0i}^{\wedge}) = \mathrm{tr}_{R/Q}\bigl(\sigma(D_0D_0^{\wedge})\bigr)$ (by defining $(D_{0i})^{\wedge} = (D^{\wedge})_{0i}$). Further, $\sigma(D_0D_0^{\wedge}) = s\,\sigma(DD^{\wedge})$, by using the fact that $D^{(1)} = \mathfrak FD\mathfrak F^{-1}$ $\Bigl($where $\mathfrak F = \begin{pmatrix}0 & E_{s-1}\\ b & 0\end{pmatrix}\Bigr)$ and hence, by iteration,
$$D^{(k)} = \mathfrak F^kD\mathfrak F^{-k}. \tag{51}$$
Now $\overline{\sigma(DD^{\wedge})} = \sigma(\bar D\,\bar G^{-1}D'\bar G) = \sigma(GDG^{-1}\bar D') = \sigma(DD^{\wedge})$ and therefore, for $D \in (V)$, $\sigma(DD^{\wedge})$ is real. The elements $x_{kl}$ of $D$ are linearly independent over $R$ and, looking upon them as independent complex variables, we see that $\sigma(DD^{\wedge})$ is a hermitian form $f(\ldots, x_{kl}, \bar x_{kl}, \ldots)$ in the $s^2$ complex variables $x_{kl}$. On the other hand, by using the arguments of Proposition 7, the necessary and sufficient condition for $\mathrm{tr}_{R/Q}\bigl(\sigma(D_0D_0^{\wedge})\bigr)$ to be positive is that $\sigma(D_0D_0^{\wedge}) = s\,\sigma(DG^{-1}\bar D'G)$ should be a totally positive-definite hermitian form. Analogously to Lemma 2, the necessary and sufficient condition for this may be seen to be that the hermitian matrix $G$ must be totally positive-definite over $Z$. We have thus proved
Proposition 11. In terms of the representation $\alpha \to D$ of $V$ over $Z$, any positive involution of the second kind in $(V)$ is of the form $D \to G^{-1}\bar D'G$, where $G = FL$ is totally positive-definite hermitian and $L = F^{-1}\bar L'F$ corresponds to a $\lambda \ne 0$ in $V$.

In particular, for the involution $D \to \bar D = F^{-1}\bar D'F$ to be positive, the necessary and sufficient condition is that $F^{-1} = [a, aa^{(1)}, \ldots, aa^{(1)}\cdots a^{(s-1)}]$ is totally positive-definite, i.e. that $a$ is totally positive.
Suppose $a$ is not totally positive, i.e. the involution $\alpha \to \bar\alpha$ is not positive. Then we claim that the involution $\alpha \to \hat\alpha$ in $V$, defined by $\hat\jmath = \mu^{-1}\bar\jmath\,\mu$ and $\hat\rho = \bar\rho$ for $\rho \in Z$, is positive for suitably chosen $\mu$ in $Z_0$. In fact, for $\alpha \in V$ we have $\hat D = G^{-1}\bar D'G$, where
$$G^{-1} = [a\mu,\; aa^{(1)}\mu^{(1)},\; \ldots,\; aa^{(1)}\cdots a^{(s-1)}\mu^{(s-1)}].$$
Now $G = \bar G'$, and we have only to choose $\mu$ in $Z_0$ such that $G$ is totally positive. Certainly we can find $\mu \in Z_0$ such that the numbers
$$a\mu, \quad aa^{(1)}\mu^{(1)}, \quad \ldots, \quad aa^{(1)}\cdots a^{(s-1)}\mu^{(s-1)} \tag{52}$$
are all positive, since this merely involves choosing $\mu \in Z_0$ such that $\mu, \mu^{(1)}, \ldots, \mu^{(s-1)}$ have prescribed signs. Further, this entails that $\mu^{(s-1)} > 0$, since, by (44), $aa^{(1)}\cdots a^{(s-1)} = b\bar b > 0$. Multiplying all the numbers in (52) by $(\mu^{(s-1)})^{-1} > 0$ and setting $a_1 = a\mu/\mu^{(s-1)}$, we see that the numbers $a_1, a_1a_1^{(1)}, \ldots, a_1a_1^{(1)}\cdots a_1^{(s-1)}$ are positive, and hence $G$ is positive-definite. In a similar way, by properly choosing the signs of the other conjugates of $\mu$ over $Q$, we can actually ensure that $a_1$ is totally positive, and hence that $G$ is totally positive-definite. Thus the existence of positive involutions of the second kind in $V$ is ensured.
We have seen that any positive involution in $(V)$ (with $L$ as the fixed field in $R$) is given by $D \to G^{-1}\bar D'G$, where $G = FL$ is totally positive-definite hermitian and $L = F^{-1}\bar L'F \in (V)$. We shall now find that the real dimension of the linear closure $C$ of the corresponding $G$ in the space of $hs^2$-rowed real square matrices is $gs^2$, where $g = \frac h2$. For, $\hat L$ is equivalent over the field of complex numbers to $[L_{01}, \ldots, L_{0h}]$, and $L_{01}$ is equivalent to $[L, L^{(1)}, \ldots, L^{(s-1)}]$. From (51), we know that $L, L^{(1)}, \ldots, L^{(s-1)}$ are all equivalent to one another. Looking at the form of a general $L$ in $(V)$, we see that its elements are linearly independent over $R$ and are of the form $\xi + \eta\sqrt d$, where $\xi, \eta$ are in $Z_0$. Pairing off the $h$ conjugates of $R$ over $Q$ as $R^{(1)}, R^{(2)}(= \bar R^{(1)}), \ldots, R^{(h-1)}, R^{(h)}(= \bar R^{(h-1)})$, we observe that $L_{02} = \bar L_{01}, \ldots, L_{0h} = \bar L_{0(h-1)}$. Expressing $\xi, \eta$ in terms of a basis $\theta_1, \ldots, \theta_s$ of $Z_0$ over $L$, we can thus conclude that the complex dimension of the linear closure of $\hat L$, and hence of $\hat G = \hat F\hat L$, is $gs^2$. The condition $G = \bar G'$ means that the real dimension of $C$ is precisely $gs^2$; the positivity of $G$ is expressed in terms of a finite number of inequalities. Using the fact that the rational numbers are dense in the reals, we can find $L \in (V)$ such that the corresponding $G = FL$ is sufficiently close to an element of $C$; and, to secure $G = \bar G'$, we have only to take $\frac12(L + \bar L)$ instead of $L$.
We shall, without risk of confusion, denote, till the end of this section, the rational representations $\hat D, \hat F, \hat L, \hat G$, etc. by $D, F, L, G$, etc. respectively. Let $D \to D^*$ be a positive involution in $(V)$; then $D^* = G^{-1}D'G$ with rational $G = G' > 0$. Now, any other positive involution in $(V)$ is of the form $D \to L^{-1}D^*L$, where $L = L^*$ is in $(V)$ and, further, $GL$ is positive symmetric. By using Lemma 2, this is equivalent to saying that the eigenvalues of $L$ are real and positive. Such an element $L$ in $(V)$ may be called a positive element of $(V)$. A nice characterisation of positive elements is given by the following.

Proposition 12 (Albert [6]). Given a positive involution $D \to D^*$ of $(V)$, any other positive involution in $(V)$ is of the form $D \to L^{-1}D^*L$, where $L = \sum_{k=1}^{p}L_kL_k^*$ with $L_k$ in $(V)$, not all equal to $0$.
Proof. First, let, for $L \in (V)$, $D \to L^{-1}D^*L$ be a positive involution. Then, from above, we know that all the eigenvalues of $L$ are real and positive. Let $r$ be a root of the characteristic equation $|xE - L| = 0$. Then $r$ is a totally positive algebraic number; let $F$ be the field generated by $r$ over $Q$. By a theorem of Siegel [19], $r = r_1^2 + r_2^2 + r_3^2 + r_4^2$, where, for $1 \le k \le 4$, $r_k = \sum_{l=0}^{N-1}a_{kl}r^l$ $(a_{kl} \in Q)$ and $N$ is the degree of $F$ over $Q$. Denoting, for $1 \le k \le 4$, the polynomial $\sum_{l=0}^{N-1}a_{kl}t^l$ by $p_k(t)$, and $p_1^2(t) + \cdots + p_4^2(t) - t$ by $p(t)$, we see that $p(r) = 0$. Since $r$ is an eigenvalue of $L$, $p(r) = 0$ is an eigenvalue of $p(L)$. But $p(L)$, being an element of the division-algebra $(V)$, must consequently be $0$, i.e. $L = L_1^2 + L_2^2 + L_3^2 + L_4^2$, where $L_k = p_k(L)$ $(1 \le k \le 4)$ are in $(V)$. Now $L = L^*$ implies that $L_k^* = L_k$, i.e.
$$L = L_1L_1^* + \cdots + L_4L_4^*.$$
Clearly at least one $L_k$ is different from $0$.
Conversely, let, in fact, $L = \sum_{k=1}^{p}L_kL_k^* \ne 0$ with $L_k \in (V)$. Then we claim that the mapping $D \to L^{-1}D^*L$ is a positive involution of $(V)$. That it is an involution is clear. What remains to be shown is that $\sigma(DL^{-1}D^*L) > 0$ for $D \ne 0$ in $(V)$. But now
$$\begin{aligned}
\sigma(DL^{-1}D^*L) &= \sigma(DL^{-1}\,L\,L^{*-1}D^*L) \quad (\text{since } L = L^*)\\
&= \sigma(D_1LD_1^*L) \quad (\text{setting } D_1 = DL^{-1})\\
&= \sum_{k,l=1}^{p}\sigma(D_1L_kL_k^*D_1^*L_lL_l^*)\\
&= \sum_{k,l}\sigma\bigl(L_l^*D_1L_k\,(L_l^*D_1L_k)^*\bigr).
\end{aligned}$$
Since $L \ne 0$, at least one $L_k \ne 0$, and hence at least one $L_l^*D_1L_k \ne 0$ in $(V)$; by the positivity of the involution $D \to D^*$, we see that the new involution is also positive.
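The positivity argument in the converse can be illustrated numerically for ordinary real matrices (a sketch only — here the $L_k$ are arbitrary real matrices rather than elements of a division algebra): for $L = \sum_k L_kL_k'$ symmetric positive definite and any $D \ne 0$, the trace $\sigma(DL^{-1}D'L)$ is positive, since it equals the squared Frobenius norm of $L^{1/2}DL^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# L = sum of L_k L_k' is symmetric positive definite (some L_k is nonsingular)
Ls = [rng.standard_normal((n, n)) for _ in range(3)]
L = sum(Lk @ Lk.T for Lk in Ls)
assert np.allclose(L, L.T)
assert np.linalg.eigvalsh(L).min() > 0

# sigma(D L^{-1} D' L) > 0 for D != 0
D = rng.standard_normal((n, n))
trace = np.trace(D @ np.linalg.inv(L) @ D.T @ L)
assert trace > 0

# the reason: the trace equals ||L^{1/2} D L^{-1/2}||_F^2
w, U = np.linalg.eigh(L)
Lhalf = U @ np.diag(np.sqrt(w)) @ U.T
A = Lhalf @ D @ np.linalg.inv(Lhalf)
assert np.isclose(trace, np.linalg.norm(A, 'fro') ** 2)
print("ok")
```

This is only the matrix shadow of the argument; in the text the decisive extra step is that $p(L)$, lying in a division algebra, must vanish identically.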
7 Existence of R-matrices with given commutator-algebra

Let $V$ be a division algebra over $Q$ with an involution, and let $(M)$ be a rational representation of $V$. Then $(M)$ is equivalent to a multiple, say $q$ times, of the regular representation $(V)$ of $V$ over $Q$. If $M \in (M)$, then we can suppose $M = [D, \ldots, D]$ ($q$ times) $= D_q$ (abbreviating $[G, \ldots, G]$ ($q$ times) as $G_q$). The involution $\alpha \to \bar\alpha$ in $V$ can be described as $M \to F_q^{-1}M'F_q$, where $F_q$ is rational symmetric and $M \in (M)$. In terms of $(M)$, the involution $\alpha \to \hat\alpha$ (for $\lambda \ne 0$ in $V$) is described as $M \to M^* = G_q^{-1}M'G_q$, where $G_q = F_qL_q$, $L_q = \bar L_q \in (M)$ and $G_q = G_q'$. If the involution $\alpha \to \hat\alpha$ is positive, then $G$ is positive.

In connection with the existence of an R-matrix with the property that $RM = MR$ for every $M \in (M)$, we shall first look for a rational nonsingular skew-symmetric matrix $A$ such that, for all $M \in (M)$, we have
$$M^* = A^{-1}M'A \tag{53}$$
and then ask for all $A$ for which (53) is true. But since $M^* = G_q^{-1}M'G_q$, (53) gives $AG_q^{-1}M' = M'AG_q^{-1}$, i.e. $(AG_q^{-1})' \in (F)$, the commutator-algebra of $(M)$. Setting $T_0 = (G_q^{-1})'A$, we see that $G_q'T_0 = A = -A' = -T_0'G_q$, i.e.
$$T_0 = -G_q^{-1}T_0'G_q. \tag{54}$$
Now, for $T \in (F)$, we can show that $G_q^{-1}T'G_q \in (F)$ and that the mapping $T \to G_q^{-1}T'G_q$ is, in fact, an involution of $(F)$. Actually, for $T \in (F)$,
$$\bar T = F_q^{-1}T'F_q = L_qG_q^{-1}T'G_qL_q^{-1} = G_q^{-1}T'G_q,$$
since elements of $(F)$ commute with $L_q$. Thus all the involutions in $(V)$ induce the same involution $T \to \bar T$ in $(F)$. Now, from (54), we have $T_0 = -\bar T_0$. The problem then is to find non-singular $T$ in $(F)$ such that $\bar T = -T$. Given $T$ in $(F)$, $T_1 = \frac12(T - \bar T)$ always satisfies $\bar T_1 = -T_1$. But if we can ensure that $T_1$ is also non-singular, then we will be through.
Let now $T \in (F)$ be of the form $(T_{kl})$ $(1 \le k, l \le q)$, with the $T_{kl}$ being $hs^2$-rowed rational square matrices. Then, for every $D \in (V)$, we have $DT_{kl} = T_{kl}D$, i.e. $T_{kl}$ belongs to $(V)'$, the commutator-algebra of $(V)$. If $\bar T$ should be equal to $-T$, then we must have, in particular, $\bar T_{kk} = -T_{kk}$ for $1 \le k \le q$. If we can find nonsingular $T_{11}$ in $(V)'$ with $\bar T_{11} = -T_{11}$, then $T = [T_{11}, \ldots, T_{11}]$ will meet our requirements. Now, if $(V)'$ is not commutative, there always exists at least one $T_2 \ne 0$ for which $\bar T_2 \ne T_2$, and then we can take $T_{11} = T_2 - \bar T_2$ $(\ne 0$, and hence non-singular, since $(V)'$ is a division-algebra$)$. (If, for every $T_3 \in (V)'$, we had $\bar T_3 = T_3$, then, for any two elements $T_4, T_5 \in (V)'$, we would have $T_4T_5 = \bar T_4\bar T_5 = \overline{T_5T_4} = T_5T_4$.)
Taking a basis $\eta_1, \ldots, \eta_n$ of $V$ over $Q$, for any $\alpha \in V$ we have two representations:
$$\alpha \to D, \text{ where } \alpha\begin{pmatrix}\eta_1\\ \vdots\\ \eta_n\end{pmatrix} = D\begin{pmatrix}\eta_1\\ \vdots\\ \eta_n\end{pmatrix}; \qquad \alpha \to B, \text{ where } (\eta_1\alpha, \ldots, \eta_n\alpha) = (\eta_1, \ldots, \eta_n)B;$$
and, further, $B = C^{-1}DC$ for a fixed rational nonsingular matrix $C$. The matrices $B$ give a regular representation of $(V)'$. Now $B' = C'D'C'^{-1} = (C'G)D^*(C'G)^{-1}$. Hence the matrices $D^*$, for $D \in (V)$, give an equivalent representation of $(V)'$. Denoting this equivalent representation itself by $(V)'$, we see that $(V)$ and $(V)'$ coincide as sets and their multiplicative structure coincides on their centre (cf. Proposition 6). If the involution in $V$ is of the second kind, then there exists already in $R$ an element $c$ with $\bar c \ne c$.
If, finally, the involution is of the first kind and, further, $V = R$, then the commutator-algebra of $(V)$ is $(V)$ itself, and if $\rho \to D$ is an irreducible representation of $R$ over $Q$, then, by (19), $D = \Omega[\rho^{(1)}, \ldots, \rho^{(h)}]\Omega^{-1}$. Taking $T_{kl} = [\rho_{kl}^{(1)}, \ldots, \rho_{kl}^{(h)}]\Omega^{-1}$, with $\rho_{kl} \in R$ $(1 \le k, l \le q)$ for which the matrix $(\rho_{kl})$ is non-singular and skew-symmetric, we have then that the matrix $A = (\Omega_q')^{-1}(T_{kl}) = \bigl((\Omega')^{-1}[\rho_{kl}^{(1)}, \ldots, \rho_{kl}^{(h)}]\Omega^{-1}\bigr)$ is clearly rational, non-singular and skew-symmetric. A necessary and sufficient condition for such a non-singular skew-symmetric matrix $(\rho_{kl})$ over $R$ to exist is that $q$ is even. It is easy to verify that $\det A = N_{R/Q}\bigl(\det(\rho_{kl})\bigr)/(\det\Omega)^{2q} \ne 0$, i.e. $A$ is non-singular. If $q = 2p$, for example, we can choose $(\rho_{kl}) = \begin{pmatrix}0 & E_p\\ -E_p & 0\end{pmatrix}$, $E_p$ being the $p$-rowed identity matrix.
Having found a rational skew-symmetric matrix $A$ satisfying (53), we proceed to look for an R-matrix $R$ having $(M)$ for its commutator-algebra. The following proposition prompts us to look for $R$ in the linear closure, with respect to the reals, of the algebra $(F)$.

Proposition 13. Any real matrix $T$ for which $TM = MT$ for all $M \in (M)$ belongs to the real linear closure of $(F)$.

Proof. Writing $T = \lambda_1T_1^0 + \cdots + \lambda_kT_k^0$, with $T_1^0, \ldots, T_k^0$ rational and $\lambda_1, \ldots, \lambda_k$ real numbers linearly independent over $Q$, we see, from $TM = MT$ for $M \in (M)$, that $\sum_{p=1}^{k}\lambda_p(T_p^0M - MT_p^0) = 0$. By the linear independence of the $\lambda_p$ over $Q$, we obtain that $T_p^0 \in (F)$ for $1 \le p \le k$, i.e. $T$ lies in the real linear closure of $(F)$.

Denoting henceforth by $(M)$ also the linear closure of $(M)$ with respect to the reals, and by $(F)$ the corresponding real linear closure of the commutator-algebra, we deduce from Proposition 13 that $(F)$ is precisely the set of all real matrices commuting with all the elements of $(M)$.

Our object is then to find $R \in (F)$ such that
1) $R^2 = -E$ ($E$ being the identity matrix),
2) $AR = S$ is positive-definite symmetric, and
3) any rational $M$ for which $MR = RM$ belongs to $(M)$.
For the moment, we shall agree to ignore condition 3) and look for $R$ satisfying only conditions 1) and 2).
A necessary condition for $R$ to exist is that the involution $M \to M^* = A^{-1}M'A$ in $(M)$ is positive. In particular, $(V)$ should admit a positive involution
$$D \to D^* = G^{-1}D'G, \quad \text{where } G = G' > 0, \tag{55}$$
and hence $V$ has to be one of the following four types:
i) $V = R$, a totally real algebraic number field of degree $h$ over $Q$;
ii) $V = \mathfrak G$, a totally indefinite quaternion algebra of the first kind over $R$;
iii) $V = \mathfrak P$, a totally definite quaternion algebra of the first kind over $R$;
iv) $V$ is a cyclic algebra with a positive involution (55) of the second kind, with centre $R$ which is a totally imaginary quadratic extension of the fixed field $L$ of the involution, $L$ being totally real and of degree $g$ over $Q$; further, $V$ has a splitting field $Z$ of degree $s > 1$ over $R$, with $Z$ being realisable as indicated at the beginning of §6.
For the construction of $R$, we shall deal with these four cases separately. We shall first find a simple normal form for the elements of $(M)$, and then for the elements of $(F)$.

Case (i): $V = R$.
For $\rho \in R$, we have the regular representation $\rho \to D = \bigl(\omega_k^{(l)}\bigr)[\rho^{(1)}, \ldots, \rho^{(h)}]\bigl(\omega_k^{(l)}\bigr)^{-1}$ with respect to a basis $\omega_1, \ldots, \omega_h$ of $R$ over $Q$. The linear closure of $(V)$ with respect to the real number field consists of all matrices of the form $\bigl(\omega_k^{(l)}\bigr)[t_1, \ldots, t_h]\bigl(\omega_k^{(l)}\bigr)^{-1}$, where $t_1, \ldots, t_h$ are arbitrary real numbers. Taking an R-equivalent representation for $(V)$ (i.e. a representation equivalent over the reals), we may suppose that $(M)$ consists of all real matrices of the form $D = [(R_1)_q, \ldots, (R_h)_q]$, where $R_1, \ldots, R_h$ are independent one-rowed real square matrices, each occurring with multiplicity $q$. The commutator-algebra $(F)$ of $(M)$ consists exactly of the real matrices $T = [T_1, \ldots, T_h]$, where $T_1, \ldots, T_h$ are arbitrary $q$-rowed real square matrices, each occurring with multiplicity $1$. In passing to the new representation $\rho \to D = [\rho^{(1)}, \ldots, \rho^{(h)}]$ of $(V)$, the positive symmetric matrix $G$ in (55) goes over into the $h$-rowed identity matrix $E_h$. The positive involution in $(M)$ is just $R \to R^* = R'$, and the induced involution in $(F)$ is just $T \to T'$.
Case (ii): $V = \mathfrak G$.
Any $\alpha \in \mathfrak G$ is of the form $x + yi + zj + tk$, where $x, y, z, t \in R$, $i^2 = a > 0$, $j^2 = b > 0$, $a, b \in R$; set $\xi = x + yi$, $\bar\xi = x - yi$, $\eta = z + ti$, $\bar\eta = z - ti$. For $\alpha \in \mathfrak G$, we have the representation $\alpha = \xi + \eta j \to D$, where
$$D = \left(\begin{pmatrix}1 & 1\\ i & -i\end{pmatrix}\otimes E_2\right)[D_1, \bar D_1]\left(\begin{pmatrix}1 & 1\\ i & -i\end{pmatrix}\otimes E_2\right)^{-1}, \quad D_1 = \begin{pmatrix}\xi & \eta\\ b\bar\eta & \bar\xi\end{pmatrix}, \quad \bar D_1 = \begin{pmatrix}\bar\xi & \bar\eta\\ b\eta & \xi\end{pmatrix} = \begin{pmatrix}0 & 1\\ b & 0\end{pmatrix}D_1\begin{pmatrix}0 & 1\\ b & 0\end{pmatrix}^{-1}.$$
Further, $\mathfrak G$ has a rational representation $\alpha \to K[D^{(1)}, \ldots, D^{(h)}]K^{-1}$, $K$ being a certain fixed matrix. Going over to an R-equivalent representation, we see that $(M)$ consists of all real matrices of the form $D = [(R_1)_{2q}, \ldots, (R_h)_{2q}]$, where $R_1, \ldots, R_h$ are arbitrary 2-rowed real square matrices, each occurring with multiplicity $2q$. Any real matrix commuting with all real matrices of the form $R_{2q}$, where $R$ is a real 2-rowed square matrix, is of the form $(T_{kl})$ $(1 \le k, l \le 2q)$ with $T_{kl} = t_{kl}E_2$, $t_{kl} \in R$. Thus $(F)$ consists of all the matrices of the form $T = [T_1 \otimes E_2, \ldots, T_h \otimes E_2]$, where $T_1, \ldots, T_h$ are arbitrary $2q$-rowed real square matrices. The positive involution in $\mathfrak G$ is given by $D_1 \to D_1^* = G_1^{-1}D_1'G_1$, where $G_1$ is symmetric and totally positive over $R$. This involution goes over, in $(M)$, into the involution
$$D \to D^* = \bigl[(G_1^{(1)\,-1}R_1'G_1^{(1)})_{2q}, \ldots, (G_1^{(h)\,-1}R_h'G_1^{(h)})_{2q}\bigr].$$
For each $G_1^{(k)}$ $(1 \le k \le h)$, there exists a real non-singular matrix $C_k$ such that $G_1^{(k)} = C_k'C_k$. Taking for $(M)$ the equivalent representation $D = [(C_1R_1C_1^{-1})_{2q}, \ldots, (C_hR_hC_h^{-1})_{2q}]$, we see that $(M)$ still consists of the same set of matrices as above; but, in terms of the new representation, the given positive involution is more simply expressed by $D \to D'$, and the induced involution in $(F)$ is just $T \to T'$.
Case (iii): $V = \mathfrak P$.
For $\alpha = x + yi + zj + tk \in V$ (now $i^2 = -a$, $j^2 = -b$, with $a, b$ totally positive in $R$), we have the 4-rowed representation
$$\alpha \to D = \begin{pmatrix} x & y & z & t\\ -ay & x & at & -z\\ -bz & -bt & x & y\\ -abt & bz & -ay & x \end{pmatrix}$$
and a rational representation given by $K_1[D^{(1)}, \ldots, D^{(h)}]K_1^{-1}$ with a constant matrix $K_1$. It is easy to see, after passing to an equivalent representation, that $(M)$ consists precisely of all matrices of the form $D = [(D_1)_q, \ldots, (D_h)_q]$, where $D_1, \ldots, D_h$ are matrices of the same form as $D$ above, except that now $x, y, z, t$ are arbitrary real numbers. Let
$$C_k = \bigl[1, \sqrt{a^{(k)}}, \sqrt{b^{(k)}}, \sqrt{a^{(k)}b^{(k)}}\bigr] \quad\text{and}\quad C = [C_1, \ldots, C_h].$$
Taking $C^{-1}DC$ instead of $D$, and replacing $x, y, z, t$ by $x, \frac{y}{\sqrt a}, \frac{z}{\sqrt b}, \frac{t}{\sqrt{ab}}$ respectively, we obtain finally that $(M)$ consists of all real matrices of the form $D = [(H_1)_q, \ldots, (H_h)_q]$, where $H_1, \ldots, H_h$ are independent 4-rowed real representations of Hamiltonian quaternions, each occurring with multiplicity $q$. Let $K$ denote the algebra of real Hamiltonian quaternions, and $(K)$ the algebra of 4-rowed real matrices
$$H = \begin{pmatrix} x & y & z & t\\ -y & x & t & -z\\ -z & -t & x & y\\ -t & z & -y & x \end{pmatrix}$$
(with real $x, y, z, t$) representing elements of $K$. Then the matrices
$$\bar H = \begin{pmatrix} x & y & z & t\\ -y & x & -t & z\\ -z & t & x & -y\\ -t & -z & y & x \end{pmatrix}$$
(with $x, y, z, t$ real) give a representation of $\bar K$, the opposite algebra of $K$. We denote by $(\bar K)$ the set of such matrices $\bar H$.

The involution in $\mathfrak P$ was, to start with, given by $D \to F^{-1}D'F$, where $F^{-1} = [1, a, b, ab]$; in terms of the new representation, $F$ is to be replaced by the identity. Thus the positive involution in $(M)$ is given by $D \to D'$. The commutator-algebra $(F)$ consists of all matrices $T$ of the form $T = [\bar H_1, \ldots, \bar H_h]$, where, for $1 \le k \le h$, $\bar H_k$ is an arbitrary $q$-rowed square matrix with elements which belong to $(\bar K)$; and the involution in $(F)$ is just $T \to T'$.
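That the two sign patterns really commute — the reason $(\bar K)$-blocks make up the commutator-algebra — can be checked directly: $H$ and $\bar H$ above are (up to transpose) the left- and right-multiplication representations of a Hamiltonian quaternion, and $HH' = (x^2+y^2+z^2+t^2)E_4$. A small numpy sketch:

```python
import numpy as np

def Hmat(x, y, z, t):
    # the matrix H of (K) representing x + yi + zj + tk
    return np.array([[ x,  y,  z,  t],
                     [-y,  x,  t, -z],
                     [-z, -t,  x,  y],
                     [-t,  z, -y,  x]], float)

def Hbar(x, y, z, t):
    # the matrix H-bar of (K-bar), the opposite algebra
    return np.array([[ x,  y,  z,  t],
                     [-y,  x, -t,  z],
                     [-z,  t,  x, -y],
                     [-t, -z,  y,  x]], float)

rng = np.random.default_rng(1)
a, b = rng.standard_normal(4), rng.standard_normal(4)
H, Hb = Hmat(*a), Hbar(*b)

# the two families commute elementwise ...
assert np.allclose(H @ Hb, Hb @ H)
# ... and H H' = N(alpha) E_4, N the quaternion norm
assert np.allclose(H @ H.T, (a @ a) * np.eye(4))
print("ok")
```

Any 4-rowed real matrix commuting with all of $(K)$ must lie in $(\bar K)$, which is why the $(F)$-blocks have entries of the $\bar H$-pattern.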
Case (iv): $V$, a cyclic algebra with a positive involution of the second kind.
For $\alpha \in V$, we have the regular representation $\alpha \to D$ over $Z$ given by (35), and the rational representation $\alpha \to \hat D$ (see p. 37). We arrange the conjugates of $R$ over $Q$ as $R = R^{(1)}, R^{(2)} = \bar R^{(1)}, \ldots, R^{(h-1)}, R^{(h)} = \bar R^{(h-1)}$. Using (51) and passing to an equivalent representation over the field $C$ of complex numbers, we see that the linear closure of $(V)$ with respect to the reals consists of all complex matrices $M$ of the form $M = [(D_1)_s, (\bar D_1)_s, (D_3)_s, (\bar D_3)_s, \ldots, (D_{h-1})_s, (\bar D_{h-1})_s]$, where $D_1, D_3, \ldots, D_{h-1}$ are $g$ independent $s$-rowed complex square matrices, each occurring with multiplicity $s$. The positive involution $D \to D^* = G^{-1}\bar D'G$ in $(V)$ corresponds exactly to a positive involution $M \to P^{-1}\bar M'P$ in this closure, where $P = [G_1, G_2, \ldots, G_{hs}]$ is positive-definite hermitian, with $s$-rowed blocks $G_k$. Now, for a complex non-singular $L_k$, we have $G_k = \bar L_k'L_k$ for $1 \le k \le hs$. Let $L = [L_1, L_2, \ldots, L_{hs}]$. Taking the representation $LML^{-1}$ instead of $M$, the given involution is expressed simply by $M \to \bar M'$. Now, every complex matrix $\begin{pmatrix}\zeta & 0\\ 0 & \bar\zeta\end{pmatrix}$, with $\zeta = \xi + \sqrt{-1}\,\eta$ ($\xi, \eta$ real), is equivalent over $C$ to $\begin{pmatrix}\xi & \eta\\ -\eta & \xi\end{pmatrix}$. Thus, passing to a suitable equivalent representation, we obtain that $(M)$ consists precisely of all matrices of the form $D = [(C_1)_{sq}, \ldots, (C_g)_{sq}]$, where $C_1, \ldots, C_g$ are independent $s$-rowed square matrices whose elements are 2-rowed real representations of complex numbers, each $C_i$ occurring with multiplicity $sq$. The positive involution in $(M)$ is just $D \to D'$. The commutator-algebra $(F)$ consists of all matrices $T = [T_1 \otimes E_s, \ldots, T_g \otimes E_s]$, where $T_1, \ldots, T_g$ are independent $sq$-rowed square matrices whose elements are 2-rowed real representations of complex numbers. The involution $T \to \bar T$ in $(F)$, induced by the positive involution in $(M)$, is just $T \to T'$. We have thus proved
Theorem 5. With the notation as above, we have the following normal forms for the elements of $(M)$ and $(F)$, viz.:

Case (i), $V = R$: $\quad D = [(R_1)_q, \ldots, (R_h)_q]$ in $(M)$; $\quad T = [T_1, \ldots, T_h]$ in $(F)$.
Case (ii), $V = \mathfrak G$: $\quad D = [(R_1)_{2q}, \ldots, (R_h)_{2q}]$; $\quad T = [T_1 \otimes E_2, \ldots, T_h \otimes E_2]$.
Case (iii), $V = \mathfrak P$: $\quad D = [(H_1)_q, \ldots, (H_h)_q]$; $\quad T = [\bar H_1, \ldots, \bar H_h]$.
Case (iv), $V$ a cyclic algebra: $\quad D = [(C_1)_{sq}, \ldots, (C_g)_{sq}]$; $\quad T = [T_1 \otimes E_s, \ldots, T_g \otimes E_s]$.

In all the four cases, the given positive involution in $(M)$ is given by $D \to D'$, and the involution in $(F)$ is $T \to T'$.
At the beginning of this section, we looked for a rational matrix $A = G_q'T_0$ with $\bar T_0 = -T_0$ in $(F)$. With the simplification carried out above in $(F)$, we shall reduce $A = T_0$ to a simple normal form by making real linear transformations in $(F)$. For reducing $A$ to the simplest form, we deal with each one of the four cases separately. We denote, in the sequel, the matrix $\begin{pmatrix}0 & E_k\\ -E_k & 0\end{pmatrix}$ by $\mathfrak I_k$ ($E_k$ being the $k$-rowed identity), and shall denote $\mathfrak I_1$ by $\mathfrak I$, for brevity.

Case (i): $V = R$. We have seen that a necessary and sufficient condition for such an $A$ to exist is that $q$ is even, say $q = 2p$. Let $A = [T_1, \ldots, T_h]$, where $T_1, \ldots, T_h$ are arbitrary $2p$-rowed real nonsingular skew-symmetric matrices. By passing to an equivalent representation of $(F)$ (which does not disturb the form of the elements of $(F)$), we can suppose that $A = (\mathfrak I_p)_h$ already.
Case (ii): $V = \mathfrak G$. As in case (i), passing to an equivalent representation of $(F)$ which does not destroy the form of the elements of $(F)$, we could suppose that $A = (\mathfrak I_q)_{2h}$ already.
Case (iii): $V = \mathfrak P$. For the sake of simplification, we might, to start with, use for the Hamiltonian quaternions the representation as elements of the opposite algebra $(\bar K)$. Thus the elements of $(F)$ are exactly all matrices of the form $T = [H_1, \ldots, H_h]$, where $H_1, \ldots, H_h$ are $q$-rowed square matrices with elements in $(K)$. We now make a simple transformation in $(F)$, as follows. (Of course, we have to make a corresponding transformation also in $(M)$, in order that $(F)$ might continue to be the commutator-algebra of $(M)$; but, for the moment, we can afford to forget $(M)$.) If $H \in (K)$ corresponds to the Hamiltonian quaternion $x + yi + zj + tk = \xi + \eta j$ (where $\xi = x + yi$, $\eta = z + ti$ are in $C$ and $x, y, z, t \in R$), then $H$ is nothing but $\begin{pmatrix}\xi & \eta\\ -\bar\eta & \bar\xi\end{pmatrix}$, where $\xi, \eta, \bar\xi, \bar\eta$ are replaced by the two-rowed real representations of the corresponding complex numbers. Passing to an equivalent representation for $(F)$ with a suitable permutation matrix, we can suppose that the elements of $(F)$ are of the form
$$T = \left[\begin{pmatrix}C_{1,1} & C_{1,2}\\ -\bar C_{1,2} & \bar C_{1,1}\end{pmatrix}, \ldots, \begin{pmatrix}C_{h,1} & C_{h,2}\\ -\bar C_{h,2} & \bar C_{h,1}\end{pmatrix}\right],$$
where the $C_{k,l}$ $(1 \le k \le h$, $l = 1, 2)$ are independent $q$-rowed square matrices with elements of the form $\begin{pmatrix}x & y\\ -y & x\end{pmatrix}$, $x, y \in R$; further, $\bar C_{k,l}$ is obtained from $C_{k,l}$ by just replacing a general element $\begin{pmatrix}x & y\\ -y & x\end{pmatrix}$ in $C_{k,l}$ by $\begin{pmatrix}x & -y\\ y & x\end{pmatrix}$. Let us consider each one of the $h$ blocks $\begin{pmatrix}C_{k,1} & C_{k,2}\\ -\bar C_{k,2} & \bar C_{k,1}\end{pmatrix}$ in $T$ separately. By applying a suitable permutation-transformation to $C_{k,l}$, which brings all the elements $x$ together and all the elements $y$ together, we could suppose that $C_{k,l} = \begin{pmatrix}U_{k,l} & V_{k,l}\\ -V_{k,l} & U_{k,l}\end{pmatrix}$, where $U_{k,l}$ and $V_{k,l}$ are independent $q$-rowed real square matrices; then $\bar C_{k,l} = \begin{pmatrix}U_{k,l} & -V_{k,l}\\ V_{k,l} & U_{k,l}\end{pmatrix}$. To start with, $A$ is an element of $(F)$ satisfying $A = -A'$. By means of a transformation which does not disturb the final form of the elements of $(F)$, we can suppose $A = (\mathfrak I_{2q})_h$ already.
Case (iv): $V$, a cyclic algebra with a positive involution of the second kind.
Let $T_0 = [(T_1)_s, \ldots, (T_g)_s]$ be a non-singular skew-symmetric matrix in $(F)$. Since the 2-rowed real representation of $i = \sqrt{-1}$ commutes with the elements of $T_k$, each $T_k$ may be considered as an $sq$-rowed complex matrix; the skew-symmetry of $T_0$ then means that $M_k = -iT_k$ $(1 \le k \le g)$ is hermitian, and it is non-singular. There exists an $sq$-rowed complex non-singular matrix $L_k$ such that $\bar L_k'M_kL_k = [1_{a_k}, -1_{b_k}]$ with $a_k + b_k = sq$. Let $L = [(L_1)_s, \ldots, (L_g)_s]$, where, now, in $L_i$ we have replaced the complex elements by their 2-rowed real representations. Taking the corresponding equivalent representation of $(F)$, under which $T_0$ goes over by congruence, we see that the elements of $(F)$ are again of the same form as above, but each $T_k$ assumes the very simple form $i[1_{a_k}, -1_{b_k}]$, whose real representation is $[(\mathfrak I)_{a_k}, (-\mathfrak I)_{b_k}]$.

We now make a simple transformation on $(F)$. The elements of $(F)$ are of the form $[(T_1)_s, \ldots, (T_g)_s]$, where each $T_i$ is an $sq$-rowed matrix with elements of the form $\begin{pmatrix}x & y\\ -y & x\end{pmatrix}$, $x, y \in R$. Passing to an equivalent representation of $(F)$, by clubbing all the $x$'s together and all the $y$'s together as in case (iii), we may suppose that each $T_k = \begin{pmatrix}U_k & V_k\\ -V_k & U_k\end{pmatrix}$, where $U_k, V_k$ are arbitrary $sq$-rowed real square matrices. Thus $T_0$ goes over into $[(T_1^0)_s, \ldots, (T_g^0)_s]$, where
$$T_k^0 = \begin{pmatrix}0 & P_k\\ -P_k & 0\end{pmatrix} \quad (1 \le k \le g), \qquad P_k = [1_{a_k}, -1_{b_k}], \quad a_k + b_k = sq.$$
As a further simplification, we take the representation $B(F)B^{-1}$, where $B = [(B_1)_s, \ldots, (B_g)_s]$ and $B_k = [1_{sq}, P_k]$ $(1 \le k \le g)$. Thus $(F)$ may be supposed to be the set of all matrices of the form $[(\tilde C_1)_s, \ldots, (\tilde C_g)_s]$, where
$$\tilde C_k = \begin{pmatrix}U_k & V_kP_k\\ -P_kV_k & P_kU_kP_k\end{pmatrix} \quad (1 \le k \le g)$$
and $U_k, V_k$ are arbitrary $sq$-rowed real square matrices. Our given matrix $T_0$ goes over into the simple matrix $[(\mathfrak I_{sq})_s, \ldots, (\mathfrak I_{sq})_s]$.
Summing up, the elements of $(F)$ have the normal forms given in the following table and, in each of the four cases, the given matrix $T_0$ in $(F)$ assumes the simple form $J$:

$V = R$: $\;T = [T_1, \ldots, T_h]$, with arbitrary $2p$-rowed real $T_k$; $\;J = (\mathfrak I_p)_h$.
$V = \mathfrak G$: $\;T = [T_1 \otimes E_2, \ldots, T_h \otimes E_2]$, with arbitrary $2q$-rowed real $T_k$; $\;J = (\mathfrak I_q)_{2h}$.
$V = \mathfrak P$: $\;T = [H_1, \ldots, H_h]$, where $H_k$ is of the form $\begin{pmatrix}C_1 & C_2\\ -\bar C_2 & \bar C_1\end{pmatrix}$, with $C_l = \begin{pmatrix}U_l & V_l\\ -V_l & U_l\end{pmatrix}$ and $\bar C_l = \begin{pmatrix}U_l & -V_l\\ V_l & U_l\end{pmatrix}$; $\;J = (\mathfrak I_{2q})_h$.
$V$ cyclic: $\;T = [(\tilde C_1)_s, \ldots, (\tilde C_g)_s]$, where $\tilde C_k = \begin{pmatrix}U_k & V_kP_k\\ -P_kV_k & P_kU_kP_k\end{pmatrix}$ and $P_k = [1_{a_k}, -1_{b_k}]$ with $a_k + b_k = sq$; $\;J = [(\mathfrak I_{sq})_s, \ldots, (\mathfrak I_{sq})_s]$.

(56)
We have to find $R \in (F)$ such that $R^2 = -E$ and $JR = S = S' > 0$. Since $R$ and $J$ are both in $(F)$, they decompose into similar blocks; therefore, confining ourselves to one of the components at a time, our problem reduces to finding all real matrices $R$ satisfying
$$J_0R = S = S' > 0, \qquad R^2 = -E, \tag{57}$$
where $J_0$ is the corresponding component $\mathfrak I_p$, $\mathfrak I_q$, $\mathfrak I_{2q}$ or $\mathfrak I_{sq}$ of $J$, and, further, $R$ is of the form of the corresponding component in (56): an arbitrary $2p$-rowed real matrix $R_1$, an arbitrary $2q$-rowed real matrix $R_1$, a matrix $H_1$ as in (56), or a matrix $\tilde C_1$ as in (56). We shall call a real matrix $R$ of one of these forms an admissible matrix of type 1, 2, 3 or 4 respectively.

From (57), we get $J_0^{-1}SJ_0^{-1}S = -E$, $S = S' > 0$. Since $J_0^2 = -E$, we have
$$SJ_0S = J_0, \qquad S = S' > 0. \tag{58}$$
Thus we have to look for all admissible positive symmetric symplectic matrices $S$.
Let us now analyse (58). First note that $E + S$ is positive symmetric along with $S$. Let us set $W = 2(E + S)^{-1}$; then $W$ is positive symmetric too and, further, $S = -E + 2W^{-1}$. From (58), we get
$$4W^{-1}J_0W^{-1} - 2W^{-1}J_0 - 2J_0W^{-1} = 0, \quad \text{i.e.} \quad 2J_0 = J_0W + WJ_0.$$
Setting $J_0W - J_0 = F$, this means that $F = F'$. Further, $F$ is admissible, of the same type as $J_0$ and $W$. Let us write
$$F = \begin{pmatrix}G & H\\ H' & K\end{pmatrix}$$
with $G = G'$, $K = K'$, $G$ and $K$ having the same number of rows. Now $W = E - J_0F$ and $W = W'$ together give $J_0F = (J_0F)'$. But $J_0F = \begin{pmatrix}H' & K\\ -G & -H\end{pmatrix}$. Thus $H = H'$ and $K = -G$. Now $S = -E + 2(E - J_0F)^{-1} = (E + J_0F)(E - J_0F)^{-1}$. Thus
$$R = J_0^{-1}S = J_0^{-1}(E + P)(E - P)^{-1}, \tag{59}$$
where
$$P = J_0F = \begin{pmatrix}H & -G\\ -G & -H\end{pmatrix}, \qquad H = H', \quad G = G', \tag{60}$$
and $P$ is admissible, of one of the four types. (The parametrization of $S$ is quite similar to the Cayley parametric representation for orthogonal matrices.)
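The parametrization (59)–(60) can be sanity-checked numerically: for symmetric $H$, $G$ (subject to the positivity condition derived below), the matrix $R = J_0^{-1}(E+P)(E-P)^{-1}$ satisfies $R^2 = -E$, and $S = J_0R$ is positive symmetric and symplectic. A numpy sketch with one-rowed $H$, $G$, so that $J_0 = \mathfrak I$ is 2-rowed:

```python
import numpy as np

J0 = np.array([[0., 1.], [-1., 0.]])  # the matrix I_1
h, g = 0.1, 0.2                        # H = (h), G = (g), with |h + ig| < 1
P = np.array([[h, -g], [-g, -h]])      # P as in (60)

R = np.linalg.inv(J0) @ (np.eye(2) + P) @ np.linalg.inv(np.eye(2) - P)
S = J0 @ R

assert np.allclose(R @ R, -np.eye(2))       # R^2 = -E
assert np.allclose(S, S.T)                  # S symmetric
assert np.linalg.eigvalsh(S).min() > 0      # S positive definite
assert np.allclose(S @ J0 @ S, J0)          # S J0 S = J0, as in (58)
print("ok")
```

The algebraic reason is that $P$ of the form (60) anticommutes with $J_0$, so that $J_0SJ_0^{-1} = S^{-1}$, which is exactly (58).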
We now proceed to examine the nature of the set of all admissible $R$ satisfying $R^2 = -E$ and $J_0R = (J_0R)' > 0$, distinguishing between the various types. For this purpose, we go back to the Riemann matrices associated with the R-matrix $R$. From §1, we know that we can find a Riemann matrix $\Pi$, uniquely determined up to a left-sided complex non-singular matrix factor, such that
$$R = \begin{pmatrix}\Pi\\ \bar\Pi\end{pmatrix}^{-1}\begin{pmatrix}-iE & 0\\ 0 & iE\end{pmatrix}\begin{pmatrix}\Pi\\ \bar\Pi\end{pmatrix}, \qquad \Pi J_0^{-1}\Pi' = 0, \qquad i\,\Pi J_0^{-1}\bar\Pi' > 0. \tag{61}$$
(Here $i = \sqrt{-1}$.) If $\Pi = (A\;B)$ with square matrices $A$ and $B$, then we know that $A$, $B$ are both non-singular; hence we can assume, without loss of generality, that $\Pi = (Z\;E)$, and the last two conditions in (61) are, in terms of $Z$, just
$$Z = X + iY, \quad X = X', \quad Y = Y', \quad Y > 0. \tag{62}$$
From (59), (60) and the first condition in (61), we obtain
$$\begin{pmatrix}\Pi\\ \bar\Pi\end{pmatrix}J_0^{-1}(E + P) = \begin{pmatrix}-iE & 0\\ 0 & iE\end{pmatrix}\begin{pmatrix}\Pi\\ \bar\Pi\end{pmatrix}(E - P),$$
i.e.
$$\begin{gathered}
E + H + ZG = -i(Z - ZH + G),\\
-G - Z + ZH = -i(ZG + E + H),\\
Z(iE - iH + G) = -(E + H + iG),\\
Z(E - H - iG) = i(E + H + iG).
\end{gathered}\tag{63}$$
Let us set $Z_0 = H + iG$. Then, solving for $Z_0$ from the third equation in (63), we have $(E - iZ)Z_0 = -(E + iZ)$ (equivalently, $Z = i(E + Z_0)(E - Z_0)^{-1}$), so that
$$Z_0 = Z_0' = -(E - iZ)^{-1}(E + iZ) = -(E + iZ)(E - iZ)^{-1}. \tag{64}$$
The condition $S > 0$ is equivalent to $Y > 0$ and, using (63), this is equivalent to
$$E - \bar Z_0Z_0 > 0. \tag{65}$$
The mapping $Z \to Z_0$ takes the generalized upper half-plane of degree $n$, consisting of all $n$-rowed complex $Z$ satisfying (62), onto the generalized unit circle, consisting of all $n$-rowed $Z_0 = Z_0'$ satisfying (65).
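The Cayley-type map (64) and its inverse can be checked numerically: a symmetric $Z = X + iY$ with $Y > 0$ is carried to a symmetric $Z_0$ with $E - \bar Z_0Z_0 > 0$, and the inverse map recovers $Z$. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
E = np.eye(n)

# a random point of the generalized upper half-plane: Z = X + iY, X = X', Y = Y' > 0
X = rng.standard_normal((n, n)); X = X + X.T
A = rng.standard_normal((n, n)); Y = A @ A.T + 0.1 * E
Z = X + 1j * Y

# the map (64)
Z0 = -np.linalg.inv(E - 1j * Z) @ (E + 1j * Z)

assert np.allclose(Z0, Z0.T)                                # Z0 symmetric
assert np.linalg.eigvalsh(E - Z0.conj().T @ Z0).min() > 0   # E - Z0-bar' Z0 > 0
assert np.allclose(Z, 1j * (E + Z0) @ np.linalg.inv(E - Z0))  # inverse map
print("ok")
```

Symmetry of $Z_0$ is automatic, since both factors in (64) are polynomials in the symmetric matrix $Z$ and therefore commute.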
Thus $R$ is an admissible matrix of the form
$$R = J_0^{-1}\begin{pmatrix}E + H & -G\\ -G & E - H\end{pmatrix}\begin{pmatrix}E - H & G\\ G & E + H\end{pmatrix}^{-1}, \tag{66}$$
where $Z_0 = H + iG$ satisfies
$$Z_0 = Z_0', \qquad E - \bar Z_0Z_0 > 0. \tag{67}$$
In case $V = R$ or $V = \mathfrak G$, any $q$-rowed (respectively $2q$-rowed) real square matrix is admissible and therefore, from (60), $G$, $H$ can be arbitrary real symmetric matrices of $\frac q2$ and $q$ rows respectively. Thus $Z_0 = H + iG$ is an arbitrary point of the generalized unit circle of degree $\frac q2$ in case $V = R$, and of degree $q$ in case $V = \mathfrak G$. The matrix $Z$ is then an arbitrary point of the generalized upper half-plane of the corresponding degree. Taking into account all the components in the representation (56) of $(F)$, we are led, in the case $V = R$, to an $h$-fold product of the generalized upper half-plane of degree $\frac q2$, which is a complex space of complex dimension $\frac h2\bigl(\frac q2 + 1\bigr)\frac q2$. In the case $V = \mathfrak G$, we arrive at the $h$-fold product of the generalized upper half-plane of degree $q$, which is of complex dimension $\frac h2 q(q+1)$.
Let us take the case $V = \mathfrak P$. From the form (60) of $P$ and from the admissibility of $P$, we see that $\bar H = -H$, $\bar G = -G$, and both $G$ and $H$ have to be of the form $\begin{pmatrix}U & V\\ -V & U\end{pmatrix}$, with $U$, $V$ arbitrary $q$-rowed real square matrices. From $\bar H = -H$, $\bar G = -G$, together with $H = H'$, $G = G'$, we obtain
$$H = \begin{pmatrix}0 & X_1\\ X_1' & 0\end{pmatrix}, \qquad G = \begin{pmatrix}0 & Y_1\\ Y_1' & 0\end{pmatrix}, \quad \text{with } X_1 = -X_1' \text{ and } Y_1 = -Y_1'.$$
Now $Z_0 = \begin{pmatrix}0 & Z_1\\ Z_1' & 0\end{pmatrix}$, with $Z_1 = X_1 + iY_1 = -Z_1'$. Condition (65) is equivalent to the condition $E - Z_1\bar Z_1' > 0$. We are thus led, in this case, to the set of $q$-rowed complex square matrices $Z_1$ satisfying
$$Z_1' = -Z_1, \qquad E_q - Z_1\bar Z_1' > 0. \tag{68}$$
This space, like the generalized unit circle met before in the earlier cases, is again one of the complex symmetric spaces of E. Cartan. It is of complex dimension $q(q-1)/2$. If we take into account all the components in the representation (56) of $(F)$, we are led to an $h$-fold product of the symmetric domain defined by (68). We remark that there is no special advantage in interpreting (68) in terms of $Z$; in fact, it becomes more complicated.
We now take up case (iv). Since $P$ is admissible of type 4, it follows that
$$H = -DHD, \quad G = -DGD \quad\text{where } D = [E_a, -E_b] \text{ with } a + b = sq.$$
Breaking up $H$ as $\begin{pmatrix} H_1 & H_2\\ H_3 & H_4\end{pmatrix}$ with $a$-rowed square $H_1$, we see that $H = H' = -DHD$ implies $H_1 = 0$, $H_4 = 0$, $H_2 = H_3'$. Thus $H = \begin{pmatrix} 0 & X_2\\ X_2' & 0\end{pmatrix}$ and similarly $G = \begin{pmatrix} 0 & Y_2\\ Y_2' & 0\end{pmatrix}$ with arbitrary real $X_2$, $Y_2$ of $a$ rows and $b$ columns. Now $Z_0 = H + iG = \begin{pmatrix} 0 & Z_2\\ Z_2' & 0\end{pmatrix}$ with $Z_2 = X_2 + iY_2$ having $a$ rows and $b$ columns. Condition (65) is equivalent to the two conditions
$$E_a - Z_2\bar Z_2' > 0, \quad E_b - \bar Z_2' Z_2 > 0 \qquad (69)$$
Now $E_a - Z_2\bar Z_2' > 0$ is equivalent to the fact that the matrix
$$M = \begin{pmatrix} E_a & Z_2\\ \bar Z_2' & E_b\end{pmatrix} = \begin{pmatrix} E_a & Z_2\\ 0 & E_b\end{pmatrix}\begin{pmatrix} E_a - Z_2\bar Z_2' & 0\\ 0 & E_b\end{pmatrix}\begin{pmatrix} E_a & 0\\ \bar Z_2' & E_b\end{pmatrix}$$
is positive-hermitian. On the other hand,
$$M = \begin{pmatrix} E_a & 0\\ \bar Z_2' & E_b\end{pmatrix}\begin{pmatrix} E_a & 0\\ 0 & E_b - \bar Z_2' Z_2\end{pmatrix}\begin{pmatrix} E_a & Z_2\\ 0 & E_b\end{pmatrix}$$
is positive-hermitian if and only if $E_b - \bar Z_2' Z_2 > 0$; both decompositions are congruences of $M$. Thus the two conditions in (69) reduce to the single condition
$$E_a - Z_2\bar Z_2' > 0 \qquad (69)'$$
The set of complex rectangular matrices $Z_2$ of $a$ rows and $b$ columns satisfying $(69)'$ is again a bounded symmetric domain of E. Cartan, of complex dimension $ab$ (let us recall that $a + b = sq$). As before, if we take into account all the components in the representation (56) of (F), we are led to a $g$-fold product of the domain defined by $(69)'$, which is of complex dimension $\sum_{k=1}^{g} a_kb_k$.
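The equivalence underlying the reduction of (69) to $(69)'$ — that $E_a - Z_2\bar Z_2' > 0$ and $E_b - \bar Z_2'Z_2 > 0$ hold or fail together — can be checked numerically; a small sketch (the shapes $a = 4$, $b = 2$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 4, 2
Z2 = rng.standard_normal((a, b)) + 1j * rng.standard_normal((a, b))
Z2 /= np.linalg.norm(Z2, 2)      # spectral norm exactly 1

# The two positivity conditions agree for every scaling t*Z2:
# both hold iff all singular values of t*Z2 are < 1, i.e. iff t < 1 here.
for t in (0.3, 0.9, 1.5, 3.0):
    W = t * Z2
    c1 = (np.linalg.eigvalsh(np.eye(a) - W @ W.conj().T) > 0).all()
    c2 = (np.linalg.eigvalsh(np.eye(b) - W.conj().T @ W) > 0).all()
    assert c1 == c2 == (t < 1)
print("E_a - Z Z*' > 0 holds exactly when E_b - Z*' Z > 0 does")
```

The point of the check is that $Z_2\bar Z_2'$ and $\bar Z_2'Z_2$ share their nonzero eigenvalues (the squared singular values of $Z_2$), so the two conditions are simultaneously true or false.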
It is remarkable that in none of the four cases discussed above do we arrive at the fourth type of E. Cartan's symmetric domains.
The results above may be formulated as
Theorem 6. Any R-matrix in (F) satisfying the conditions $R^2 = -E$, $(JR) = (JR)' > 0$ is of the form (56), where the component matrices are of the form (59) with $Z_0 = H + iG$ belonging to one of the three types of E. Cartan's bounded symmetric domains mentioned above, the type being determined by (F).

In each one of the four cases above, the set of such $R \in$ (F) is non-empty; for example, $R = J$ is always in this set, since $J^2 = -E_{hs^2q}$ and the corresponding symmetric matrix is then positive. In case (iv), if $a_kb_k = 0$ for all $k$, then $R = J$ is the only R-matrix occurring in (F) with the normal form (56).
Let us define $\lambda = h$ in case $V = R$, $G$ or $P$, and $\lambda = g$ in case $V$ is a cyclic algebra carrying an involution of the second kind. From the form (56) of elements of (F), $R = [R_1, \dots, R_\lambda]$, where each component $R_k$ ($1 \le k \le \lambda$) occurs with multiplicity equal to 1 in cases (i) and (iii) and equal to 2 or $s$ in cases (ii) and (iv) respectively. We recall that to each $R_k$ ($1 \le k \le \lambda$) corresponds a Riemann matrix $P_k$ under the correspondence
$$R_k = \begin{pmatrix} P_k\\ \bar P_k\end{pmatrix}^{-1} L_0 \begin{pmatrix} P_k\\ \bar P_k\end{pmatrix}, \quad L_0 = [iE_\nu, -iE_\nu],$$
$2\nu$ being the number of rows of $R_k$. Now $hs^2q$ is an even integer, say $2n$, in all the four cases. Let us denote by $L$ the matrix $[iE_n, -iE_n]$. Then to the R-matrix $R$ in (F) corresponds the $n$-rowed Riemann matrix $P$ by means of the relation $R = \begin{pmatrix} P\\ \bar P\end{pmatrix}^{-1} L \begin{pmatrix} P\\ \bar P\end{pmatrix}$. For a suitable permutation matrix $V$, we have $V^{-1}LV = [L_0, \dots, L_0]$. From this, it is immediate that
$$P = \begin{pmatrix} E\otimes P_1 & 0 & \dots & 0\\ 0 & E\otimes P_2 & \dots & 0\\ \dots & \dots & \dots & \dots\\ 0 & \dots & \dots & E\otimes P_\lambda\end{pmatrix} \qquad (70)$$
is a Riemann matrix corresponding to $R$. Each $P_k$ is a Riemann matrix of $\nu$ rows and $2\nu$ columns with $\nu = q/2$, $q$, $2q$, $sq$ in cases (i), (ii), (iii) and (iv) respectively. Further, each $P_k$ is of the form
$$P_k = \left( i\,\frac{E_\nu + Z_k}{E_\nu - Z_k},\; E_\nu \right) \qquad (71)$$
where $Z_k = Z_k'$ and $E_\nu - Z_k\bar Z_k' > 0$. Let us denote by $H$ the set of $P$ of the form (70) with $P_k$ of the form (71), corresponding to all R-matrices in (F). As we saw, $H$ consists of at least one point $P$ of the form (70), with all $P_k = (iE_\nu, E_\nu)$ ($1 \le k \le \lambda$), and consists of exactly this point when $a_kb_k = 0$ ($1 \le k \le \lambda$).
We may now return to the problem of finding R-matrices $R$ in (F) admitting (M) as the exact algebra of commutators. For a given R-matrix $R$ in (F), let us denote by (R) the algebra of all real matrices $M$ commuting with $R$. Then clearly (R) $\supset$ (M). If $R$ were to have at least one rational commutator not in (M), then the rank of (R) over R would be strictly greater than $hs^2$, which is the rank of (M) over R, or, what is the same, the rank of (M) over Q. Thus our problem is to find R-matrices $R$ in (F) for which the corresponding real commutator-algebra (R) has rank exactly $hs^2$ over R. The advantage in introducing (R) is the following. Taking first the normal forms given in Theorem 5, the algebra (F) is the commutator-algebra of (M). Let $C$ be a real non-singular matrix such that the elements of $C^{-1}$(F)$C$ have precisely the normal form given by (56). Then (F)$_1 = C^{-1}$(F)$C$ is the commutator-algebra of (M)$_1$. But the ranks over R of (M) and (M)$_1$ are the same. Thus, in connection with the problem mentioned at the beginning of this paragraph, we are free to look among the elements of (F)$_1$ itself for R-matrices for which the corresponding algebra of all real commutators has rank exactly $hs^2$ over R. By a rational commutator of a R-matrix in (F)$_1$, we shall mean a commutator of the form $C^{-1}MC$ with rational $M$. The set of rational commutators is countable. We denote the algebra $C^{-1}$(M)$C$ by (M)$_1$.
We know that the equation $RM = MR$ for a R-matrix $R$ corresponds, in terms of the associated Riemann matrix $P$, to the equation
$$PM = KP \qquad (72)$$
where $K$ is a complex matrix. Let then, for a $P \in H$, the equation (72) hold for a real $M$. Splitting up $M$ and $K$ as $(M_{kl})$ and $(K_{kl})$ corresponding to the decomposition (70) of $P$, we obtain
$$(E\otimes P_k)M_{kl} = K_{kl}(E\otimes P_l) \qquad (73)$$
for $1 \le k, l \le \lambda$. We break up $M_{kl}$ and $K_{kl}$ respectively into $2\nu$-rowed and $\nu$-rowed square matrices corresponding to the decomposition of $E\otimes P_k$ and $E\otimes P_l$ and, denoting a typical block by $M_0$ and $K_0$ respectively, we have, from (73),
$$P_kM_0 = K_0P_l \qquad (74)$$
with complex $K_0$. Now $P_k$ and $P_l$ are of the form (71); splitting up $M_0$ as $\begin{pmatrix} A & B\\ C & D\end{pmatrix}$ with $\nu$-rowed square $A$, we have, from (74),
$$i\,\frac{E_\nu + Z_k}{E_\nu - Z_k}A + C = K_0\, i\,\frac{E_\nu + Z_l}{E_\nu - Z_l}, \qquad i\,\frac{E_\nu + Z_k}{E_\nu - Z_k}B + D = K_0.$$
Elimination of $K_0$ leads to the equations (for $1 \le k, l \le \lambda$)
$$i(E_\nu + Z_k)A(E_\nu - Z_l) + (E_\nu - Z_k)C(E_\nu - Z_l) + (E_\nu + Z_k)B(E_\nu + Z_l) - i(E_\nu - Z_k)D(E_\nu + Z_l) = 0. \qquad (75)$$
Conditions (75) are necessary and sufficient for $P \in H$ to have $M$ as a multiplier. Referred to as the singular relations, they have been studied thoroughly by G. Humbert [10] for $n = 2$.
For any $M \in$ (M)$_1$, we know that equations (75) hold identically for $P \in H$. If a $P \in H$ admits a rational multiplier $M_1$ not in (M)$_1$ (i.e. if $M_1$ is a rational commutator of the associated R-matrix), then $P$ necessarily belongs to the quadratic surface defined in $H$ by the conditions (75) corresponding to this $M_1$. (Of course, if it turns out that every $P \in H$ admits this $M_1$ as a multiplier, then this quadratic surface coincides with the whole of $H$.) The number of such surfaces, for $M_1 \notin$ (M)$_1$, is, at any rate, countable. The complement of the union of these countably many surfaces may be seen to be dense in $H$.

Let us suppose that, for all $P \in H$, a real matrix $M = (M_{kl})$ is a multiplier. Then, with the same notation as above, conditions (75) are valid with arbitrary $Z_k$, $Z_l$ in the generalized unit disc of degree $\nu$ (such that $P \in H$). In particular, taking $Z_k = 0 = Z_l$, $P \in H$, and then (75) gives $iA + C + B - iD = 0$, i.e. $A = D$, $B = -C$. Thus $M_0 = \begin{pmatrix} A & B\\ -B & A\end{pmatrix}$. Now the quadratic terms in (75) cancel out and we are left with the equation
$$2iZ_kA - 2iAZ_l + 2Z_kB + 2BZ_l = 0.$$
Setting $F = A + iB$, we have then
$$Z_k\bar F = FZ_l \qquad (76)$$
We split our further considerations into four parts according as $V = R$, $G$, $P$ or a cyclic algebra. First, we take up the case $V = R$ or $G$; here the multiplicity of the components is 1 or 2 respectively. Further, $Z_k$, $Z_l$ are arbitrary elements of the generalized unit disc of degree $\nu$. Taking $Z_k = Z_l = tE_\nu$ ($0 < t < 1$) in (76), we see that $F = \bar F$, or $B = 0$. Now $Z_kF = FZ_l$ and, since $Z_k$, $Z_l$ are arbitrary elements of the generalized unit disc, we can show that for $k \ne l$ we have $A = 0$, and for $k = l$, $A = \alpha E_\nu$ with arbitrary real $\alpha$. Thus, when $V = R$, we see that $M_{kl} = 0$ for $k \ne l$ and $M_{kk} = \alpha_kE$ with arbitrary real $\alpha_k$. For $V = G$, again $M_{kl} = 0$ for $k \ne l$, while $M_{kk} = \begin{pmatrix} \alpha_kE & \beta_kE\\ \gamma_kE & \delta_kE\end{pmatrix}$ with arbitrary real $\alpha_k$, $\beta_k$, $\gamma_k$, $\delta_k$. Thus the rank over R of the algebra of real matrices which are multipliers for every $P \in H$ is $h$ or $4h$ according as $V = R$ or $G$. But the rank of (M) over R is the same in both these cases. Thus (75) does not hold identically for $P \in H$ if $M$ is a rational multiplier not in (M)$_1$. In fact, if $P$ does not belong to the countably many quadratic surfaces in $H$ corresponding to such rational multipliers not in (M)$_1$, then it has (M)$_1$ as its exact algebra of multipliers. Thus our problem of finding a R-matrix with (M) as exact commutator-algebra admits of a solution in these two cases.
Example. If $q = 1$ and $V =$ Q, then any point $P = (\tau, 1) \in H$ (with $\tau = \alpha + \beta\sqrt{d}$; $\alpha$, $\beta$, $d$ ($d < 0$) in Q) admits all the elements of $F = Q(\sqrt{d})$ as multipliers. Clearly $F$ contains Q properly. Such points are countably many and constitute a dense set in the complex upper half-plane; the complement of this set is also dense in the upper half-plane.
We now take up for consideration the cases (iii) and (iv). In these two cases, $\lambda = h$ or $g$, the multiplicity is 1 or $s$, and $\nu = 2q$ or $sq$ respectively. Further, for $1 \le k \le \lambda$,
$$P_k = \left( i\,\frac{E_\nu + Z_k}{E_\nu - Z_k},\; E_\nu\right) \quad\text{with } Z_k = \begin{pmatrix} 0 & W_k\\ W_k' & 0\end{pmatrix}.$$
Further, in case (iii), $W_k = -W_k'$ is a $q$-rowed complex matrix satisfying $E_q - W_k\bar W_k' > 0$, while, in case (iv), $W_k$ is a complex matrix of $a_k$ rows and $b_k$ columns such that $E_{a_k} - W_k\bar W_k' > 0$ and $a_k + b_k = sq$. If, in case (iii), $q = 1$, then $W_k = 0$; in case (iv), if $a_kb_k = 0$, then $Z_k$ is to be taken just as the zero matrix of $sq$ rows and columns.
We start from condition (76) and, splitting up $F$ as $\begin{pmatrix} F_1 & F_2\\ F_3 & F_4\end{pmatrix}$ with $F_1$ having the same number of rows as $W_k$, we may rewrite (76) as follows, namely,
$$W_k\bar F_3 = F_2W_l', \quad W_k\bar F_4 = F_1W_l, \quad W_k'\bar F_1 = F_4W_l', \quad W_k'\bar F_2 = F_3W_l \qquad (76)'$$
Let now $V = P$ and $q \ge 3$. (The situation when $V = P$ and $q = 1$ or 2 is more complicated and will be dealt with later on.) We consider first, for $k \ne l$, the equation $W_k\bar F_3 = F_2W_l'$ in $(76)'$. Let $W_k = (u_{\alpha\beta})$, $\bar F_3 = (a_{\alpha\beta})$, $F_2 = (b_{\alpha\beta})$, $W_l = (v_{\alpha\beta})$. Comparing the $(\alpha, \beta)^{\text{th}}$ element on both sides of the matrix equation, we have
$$\sum_{\rho=1}^{q} u_{\alpha\rho}a_{\rho\beta} = \sum_{\rho=1}^{q} b_{\alpha\rho}v_{\beta\rho} \qquad (77)$$
Here, except for the relations $u_{\alpha\alpha} = 0$, $u_{\beta\alpha} = -u_{\alpha\beta}$ and the similar relations for the elements of $W_l$, we may regard the elements $u_{\alpha\beta}$ of $W_k$ and the elements $v_{\alpha\beta}$ of $W_l$ as independent variables. As a consequence, it can be shown that $F_2 = 0$, $F_3 = 0$ for $k \ne l$. Similarly, using the equation $W_k\bar F_4 = F_1W_l$ in $(76)'$, it can be proved that $F_1 = 0$, $F_4 = 0$ for $k \ne l$.
Thus, for $k \ne l$, the matrix $F$ occurring in (76) is 0. We may now take up the discussion of (76) for $k = l$. Then we have, in particular, from $(76)'$,
$$W_k\bar F_3 = F_2W_k' \qquad (78)$$
$$W_k\bar F_4 = F_1W_k \qquad (79)$$
Using the same notation as in (77), we obtain from (78) that
$$\sum_{\rho=1}^{q} u_{\alpha\rho}a_{\rho\beta} = \sum_{\rho=1}^{q} b_{\alpha\rho}u_{\beta\rho} \qquad (80)$$
We now proceed to show that, for $\alpha \ne \beta$, $a_{\alpha\beta} = 0$. Since $q \ge 3$, there exists $\gamma \ne \alpha, \beta$; rewriting (80) as
$$\sum_{\rho=1}^{q} u_{\alpha\rho}a_{\rho\beta} - \sum_{\rho=1}^{q} b_{\alpha\rho}u_{\beta\rho} = 0, \qquad (81)$$
the left hand side of (81) is a linear form in the variables $u_{\alpha\rho}$, $u_{\beta\rho}$, with the coefficient of $u_{\alpha\gamma}$ equal to $a_{\gamma\beta}$. Hence $a_{\gamma\beta} = 0$ for $\gamma \ne \beta$. Similarly, we can show that $b_{\alpha\beta} = 0$ for $\alpha \ne \beta$. Thus $F_2$, $F_3$ are diagonal matrices; using (79), it follows in the same way that $F_1$, $F_4$ are also diagonal. Now, from (81),
$$u_{\alpha\beta}\left(a_{\beta\beta} + b_{\alpha\alpha}\right) = 0,$$
and, by the same arguments as above, we have
$$a_{\beta\beta} = -b_{\alpha\alpha} \qquad (82)$$
Analogous to (81), we have $\sum_{\rho=1}^{q} u_{\beta\rho}a_{\rho\alpha} - \sum_{\rho=1}^{q} b_{\beta\rho}u_{\alpha\rho} = 0$, from which we may deduce, as above, that
$$a_{\alpha\alpha} = -b_{\beta\beta} \qquad (83)$$
From (82) and (83) it follows that $\bar F_3 = a_{11}E_q$, and now from (78) we deduce that $F_2 = -\bar F_3$. In a similar manner, we can use (79) to show that $\bar F_4 = F_1 = xE_q$, where $x$ is a complex number. Thus, finally, we see that for $k = l$ the matrix $F$ occurring in (76) is of the form $\begin{pmatrix} x & y\\ -\bar y & \bar x\end{pmatrix}\otimes E_q$, where $x$, $y$ are arbitrary complex numbers. Referring to (72), if $M$ is a real matrix which is a multiplier for all $P \in H$, then $M = [M_{11}, \dots, M_{kk}, \dots, M_{hh}]$, where each $M_{kk}$ is a real matrix with 4 independent real parameters. Thus the rank over R of the algebra of all real matrices commuting with all the R-matrices in (F) is $4h$, which is precisely the rank over R of (M). We may conclude, as before, that for $V = P$ and $q \ge 3$ there exist R-matrices in (F) admitting (M) as the exact commutator-algebra.
We now take up case (iv), when $V$ is a cyclic algebra with an involution of the second kind, and assume further that $qs \ge 3$ and that not all $a_kb_k$ are equal to zero. We may, without loss of generality, suppose that for $1 \le k \le r \le g$ we have $a_kb_k > 0$. Observe that $a_kb_k > 0$ and $qs \ge 3$ together imply that at least one of $a_k$, $b_k$ is greater than 1. We go back to consider equation (76). If $k > r$ and $l \le r$, then we have $FZ_l = 0$ for all $Z_l$ and consequently $F = 0$. Similarly, if $l > r$ and $k \le r$, we have again $F = 0$ in (76). Let us now suppose that $k \le r$, $l \le r$. From $(76)'$, we may deduce a relation analogous to (77). But now the elements of $W_k$ are independent complex variables which are again independent of the elements of $W_l$ (for $l \ne k$). Further, at least one of $a_k$, $b_k$ is greater than or equal to 2, and similarly for $a_l$, $b_l$. It is easy to deduce, as before, that for $k \ne l$, $k \le r$, $l \le r$, we have $F = 0$, while for $1 \le k = l \le r$ we see that $F = [xE_{a_k}, \bar xE_{b_k}]$, where $x$ is arbitrary complex. Thus, referring to (73), $M_{kk}$ is a real matrix with $2s^2$ real parameters. We may then conclude that any real matrix $M$ which is a multiplier for all Riemann matrices $P \in H$ is necessarily of the form
$$M = [M_{11}, \dots, M_{rr}, N] \qquad (84)$$
where $M_{11}, \dots, M_{rr}$ are real matrices with $2s^2$ independent real parameters each and $N$ is a $2(g-r)s^2q$-rowed real square matrix.

If $r = g$, in other words $a_kb_k > 0$ for $1 \le k \le g$ and $qs \ge 3$, then the rank over R of the algebra of all such matrices $M$ is $2gs^2 = hs^2$, which is precisely the rank over R of (M). Thus, in the case when $V$ is a cyclic algebra with an involution of the second kind and further $qs \ge 3$, $a_kb_k > 0$ ($1 \le k \le g$), we see that there exist R-matrices with (M) as the complete commutator-algebra.
If $1 \le r < g$, we know nothing about the nature of the matrix $N$ appearing in (84). Therefore, before we proceed to discuss the case when $qs \ge 3$ and $1 \le r < g$, we need to prove

Lemma 3. Let $L$ be an algebraic number field of degree $g$ over Q, with $\omega_1, \dots, \omega_g$ as a basis over Q, and let $\Omega$ stand for the $g$-rowed square matrix $(\omega_k^{(l)})$ ($1 \le k, l \le g$). Further, let $Q$ be a $g$-rowed square matrix of the form $\begin{pmatrix} a & *\\ 0 & B\end{pmatrix}$ with a complex number $a$, and let $\Omega Q\Omega^{-1}$ be rational. Then necessarily $a \in L$ and, furthermore, $Q = [a^{(1)}, \dots, a^{(g)}]$.

Proof. Let $\Omega Q\Omega^{-1} = (p_{kl})$ ($1 \le k, l \le g$). Then $\Omega Q = (p_{kl})\Omega$ and, in particular, comparing first columns, we have $\omega_k^{(1)}a = \sum_{l=1}^{g} p_{kl}\omega_l^{(1)}$ ($p_{kl} \in$ Q). Hence $a = \sum_{l=1}^{g} p_{1l}\omega_l^{(1)}/\omega_1^{(1)} \in L$ (since $\omega_1 \ne 0$), and $a^{(1)} = a$, $a^{(2)}, \dots, a^{(g)}$ are its conjugates over Q. But we know that $\Omega[a^{(1)}, \dots, a^{(g)}]\Omega^{-1} = (p_{kl})$ and therefore $Q = [a^{(1)}, \dots, a^{(g)}]$.
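Lemma 3 can be illustrated numerically in the simplest case $L = Q(\sqrt 2)$, $g = 2$, with basis $\omega_1 = 1$, $\omega_2 = \sqrt 2$ (a hypothetical example, not from the text): for $a = 3 + 2\sqrt 2$, the matrix $\Omega[a^{(1)}, a^{(2)}]\Omega^{-1}$ is the rational matrix of multiplication by $a$ on the chosen basis.

```python
import numpy as np

s = 2 ** 0.5
# Omega = (omega_k^(l)): row k = basis element, column l = embedding of Q(sqrt 2).
Omega = np.array([[1.0, 1.0],
                  [s, -s]])

# a = 3 + 2*sqrt(2) and its conjugate a^(2) = 3 - 2*sqrt(2).
Q = np.diag([3 + 2 * s, 3 - 2 * s])

# Omega Q Omega^{-1} is rational: 1*a = 3*1 + 2*sqrt(2), sqrt(2)*a = 4*1 + 3*sqrt(2).
P = Omega @ Q @ np.linalg.inv(Omega)
assert np.allclose(P, [[3, 2], [4, 3]])
```

Conversely, starting from the rational matrix $(p_{kl}) = \begin{pmatrix}3 & 2\\ 4 & 3\end{pmatrix}$, the diagonal entries of $\Omega^{-1}(p_{kl})\Omega$ are exactly the conjugates $a^{(1)}, a^{(2)}$, as the Lemma asserts.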
Remark. We shall use, in the sequel, a generalization of Lemma 3 (the proof of which runs on exactly the same lines), namely the following. Let $d \ge 1$ be a rational integer and $L$, $\Omega$ as in the hypothesis of Lemma 3. Let $Q = \begin{pmatrix} H & *\\ 0 & B\end{pmatrix}$ with a $d$-rowed complex square matrix $H$, and let $(\Omega \otimes E_d)Q(\Omega \otimes E_d)^{-1}$ be rational. Then necessarily the elements of $H$ are in $L$ and, further, $Q = [H^{(1)}, \dots, H^{(g)}]$.
We may proceed now to discuss case (iv) with the assumption that $qs \ge 3$, $1 \le r < g$. In order that the application of the above-mentioned generalization of Lemma 3 be feasible, we do not go right up to the eventual normal form (56) of (F), but stop short somewhat earlier. So let us start de novo. From the representation $D_0$ of $V$ over R given on p. 37, we first get a representation $D_1$ of $V$ over $Z$ as follows; namely, if $(1, \sqrt{c})$ is a basis of $R$ over $Z$, then, denoting the matrix $\begin{pmatrix} 1 & 1\\ \sqrt{c} & -\sqrt{c}\end{pmatrix}$ by $\Omega_1$, we define $D_1$ by $D_1 = (\Omega_1 \otimes E_{s^2})[D_0, \bar D_0](\Omega_1 \otimes E_{s^2})^{-1}$. Now we can get from this a rational representation $D$ of $V$ by the prescription
$$D = C_1\left[D_1^{(1)} \otimes E_q, \dots, D_1^{(g)} \otimes E_q\right]C_1^{-1}$$
where $C_1 = \Omega \otimes E_{2s^2q}$, $\Omega = (\omega_k^{(l)})$, $\omega_1, \dots, \omega_g$ is a basis of $Z$ over Q and $D_1^{(1)}, \dots, D_1^{(g)}$ are the conjugates of $D_1$ over Q. Thus the elements of the algebra (M)$_2 = C_1^{-1}$(M)$C_1$ are of the form $[D_1^{(1)}, \dots, D_1^{(g)}]$, where $D_1$ is a $2s^2q$-rowed square matrix with elements in $Z$. Defining the algebra (F)$_2$ by (F)$_2 = C_1^{-1}$(F)$C_1$, we shall look for R-matrices in (F)$_2$ with the required properties. Applying to (F)$_2$ the procedure given earlier to reduce (F) to the normal form (56), we remark that this merely involves going over to the representation $C_2^{-1}$(F)$_2C_2$ (where $C_2 = [C_{2,1}, \dots, C_{2,g}]$ with $2s^2q$-rowed real square matrices $C_{2,k}$). Let $M_0$ be a rational matrix commuting with all R-matrices in (F). Then $M = C_2^{-1}C_1^{-1}M_0C_1C_2$ commutes with all the corresponding R-matrices in $C_2^{-1}$(F)$_2C_2$. Since $r \ge 1$, it follows, by using the same arguments as for the case $r = g$ above, that $M$, and hence $M_2 = C_2MC_2^{-1}$, is of the form (84). As yet we know nothing about the number of parameters involved in $N$. But now $M_0 = C_1M_2C_1^{-1}$ is rational and, appealing to the Remark following Lemma 3, we conclude that $M_{11}$ has elements in $Z$ and, further, that $M_2$ itself is of the form
$$M_2 = [M_{11}^{(1)}, \dots, M_{11}^{(g)}] \qquad (85)$$
If we take $M_0$ to be real instead of rational, and if further $M_0$ commutes with all the R-matrices in (F), we see, by arguments as in the case $r = g$, that $C_1^{-1}M_0C_1$ is of the form (84) again, with each $M_{kk}$ ($1 \le k \le r$) having $2s^2$ independent real parameters. Now such $M_{11}$ constitute precisely the linear closure of the matrices $M_{11}^{(1)}$ occurring in (85) with elements in $Z$. Thus the matrices $M_{11}^{(1)}$ in (85) form an algebra of rank $2s^2$ over $Z$. Let then $F_1, \dots, F_{2s^2}$ be a basis of this algebra, so that every such $M_{11}^{(1)} = \sum_{k=1}^{2s^2} x_kF_k$ (with $x_k \in Z$); hence, in (85), $M_{11}^{(l)} = \sum_{k=1}^{2s^2} x_k^{(l)}F_k^{(l)}$. This enables us to conclude that if $M_0$ is a real commutator of all R-matrices in (F), then, by virtue of $M_0$ lying in the linear closure of rational commutators of the same kind, $C_1^{-1}M_0C_1 = [M_{11}, \dots, M_{gg}]$, where $M_{kk} = \sum_{l=1}^{2s^2} x_{kl}F_l^{(k)}$ with arbitrary real numbers $x_{kl}$. In other words, the rank of the algebra of real commutators of all R-matrices in (F) is $2gs^2 = hs^2$. Thus, for $qs \ge 3$, $1 \le r \le g$, there do exist R-matrices with (M) as the complete commutator algebra.
We shall now prove that, for $qs \ge 2$ and $r = 0$ (i.e. $a_kb_k = 0$ for all $k$), there cannot exist R-matrices with (M) as the complete commutator-algebra. Since $a_kb_k = 0$ for $1 \le k \le g$, the only R-matrix in (F)$_1 = C^{-1}$(F)$C$ (referring to the notation on p. 56) is $J = [J_0, \dots, J_0]$ ($g$ times). Now the matrices $P_k$ occurring in case (iv) in (56) are all $(iE_{sq}, E_{sq})$, and therefore all the matrices in (F)$_1$ commute with $J$. Thus all elements of (F) commute with the R-matrix $R = CJC^{-1}$. If now $R$ were to have (M) as its exact commutator algebra, then necessarily (F) $\subset$ (M). But, by Proposition 6, (F) $\cap$ (M) = (R). Hence (F) = (R). Now, if $qs \ge 2$, then either $q > 1$ or $s > 1$. If $q > 1$, then (F) is the $q$-rowed matrix-algebra over the commutator algebra of (V) and therefore is not commutative; but then (F) = (R) gives a contradiction. Again, if $q = 1$ and $s > 1$, then (V) is non-commutative and so is its commutator algebra (F), which again contradicts (F) = (R). Thus our assertion above is proved.
The exceptional cases which remain to be considered are the following, namely a) $V = P$, $q = 1$ or 2, and b) $V$ a cyclic algebra with an involution of the second kind, with $qs = 1$, or with $qs = 2$ and not all $a_kb_k$ equal to zero. We shall slightly reformulate our problem of finding R-matrices $R$ in (F) for which 1) $AR = S = S' > 0$, where $A = G_qT_0$ with $T_0 = -\tilde T_0$ in (F), 2) $R^2 = -E$ and 3) (M) is the complete commutator-algebra. Let us set $N = T_0R$. Then, barring the last condition, in terms of $N$ these conditions are merely that 1) $N \in$ (F), 2) $\tilde N = G_q^{-1}N'G_q = N$ and 3) $(T_0^{-1}N)^2 = -E$. On the other hand, by Lemma 2, $S = G_qN > 0$ and $G_q > 0$ together imply that all the eigenvalues of $N$ are real and positive. Thus our problem reduces to finding $N \in$ (F) for which
$$N = \tilde N, \quad (T_0^{-1}N)(T_0^{-1}N) = -E, \quad\text{the eigenvalues of } N \text{ are real and positive.} \qquad (86)$$
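The appeal to Lemma 2 here — $G_q > 0$ together with $S = G_qN = S' > 0$ forces the eigenvalues of $N$ to be real and positive — can be checked numerically; a minimal sketch, with randomly generated positive symmetric $G$ and $S$ and $N = G^{-1}S$:

```python
import numpy as np

rng = np.random.default_rng(2)
q = 5
A = rng.standard_normal((q, q))
G = A @ A.T + q * np.eye(q)          # G = G' > 0
B = rng.standard_normal((q, q))
S = B @ B.T + q * np.eye(q)          # S = S' > 0

N = np.linalg.inv(G) @ S             # then G N = S is symmetric positive
ev = np.linalg.eigvals(N)            # N itself need not be symmetric
assert np.allclose(ev.imag, 0, atol=1e-8)
assert (ev.real > 0).all()
print("eigenvalues of N are real and positive")
```

The reason is that $N = G^{-1}S$ is similar to $G^{-1/2}SG^{-1/2}$, which is symmetric positive definite.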
We shall first take up the case $V = P$, $q = 1$. Choosing for (M) the 4-rowed representation of the opposite algebra of $P$, we may suppose, without loss of generality, that (F) is the 4-rowed representation $D$ of $P$ over R given on p. 46. We observe that the involution $T \to \tilde T$ in (F) is a positive involution, since the trace form $\sigma(\tilde TT) = \sigma(TF_q^{-1}T'F_q)$ is positive definite over the centre R, in view of $F_q$ being positive definite. But now we know that the abstract totally definite quaternion algebra $P$ has a unique positive involution, viz. $\xi = x + yi + zj + tk \to \bar\xi = x - yi - zj - tk$. Thus, for $N \in$ (F), $N = \tilde N$ implies that $N$ is in the centre of (F). This gives $R = T_0^{-1}N = NT_0^{-1}$, i.e. $T_0R = N = RT_0$. Now suppose that there exists a R-matrix $R$ in (F) for which (M) is the exact commutator-algebra. Then, $T_0$ being a rational commutator of $R$, $T_0 \in$ (M). Since $T_0 \in$ (F) too, we have $T_0 \in$ (F) $\cap$ (M) = (R). This gives $T_0 = \tilde T_0$, in contradiction to the relation $T_0 = -\tilde T_0$ from which we started. We may thus conclude that, in the case $V = P$, $q = 1$, there cannot exist R-matrices with (M) as the exact commutator-algebra.
Next we consider the case $V = P$ and $q = 2$. By choosing a suitable representation (P) of $P$, we can suppose, without loss of generality, that for the elements of the commutator-algebra $(\tilde P)$ of (P) we already have the representation $[D^{(1)}, \dots, D^{(h)}]$ over the centre R and its conjugates over Q, as indicated on p. 46. Now $T_0 = \begin{pmatrix} \alpha_1 & \beta_1\\ \gamma_1 & \delta_1\end{pmatrix} = -\tilde T_0$ with $\alpha_1$, $\beta_1$, $\gamma_1$, $\delta_1$ in $(\tilde P)$. We first remark that there exists a 2-rowed non-singular matrix $W$ with elements in $(\tilde P)$ such that $\tilde WT_0W = \begin{pmatrix} \alpha_2 & 0\\ 0 & \delta_2\end{pmatrix}$ with $\alpha_2 = -\tilde\alpha_2$, $\delta_2 = -\tilde\delta_2$ in $(\tilde P)$. If now for some $p$ in (R) we have $\delta_2 = p\alpha_2$, then we can easily find $\lambda \ne 0$ in $(\tilde P)$ such that $\tilde\lambda\delta_2\lambda$ does not commute with $\alpha_2$. Therefore, choosing, for example, $W\begin{pmatrix} 1 & 0\\ 0 & \lambda\end{pmatrix}$ instead of $W$, we may suppose that for no element $p$ in (R) do we have $\delta_2 = p\alpha_2$. The matrix $N \in$ (F) has the properties mentioned in (86). Now it is trivial to see that $\tilde WNW$ is again symmetric under the involution in (F). Moreover, from (86), we have, in view of Lemma 2, that $F_2N$ is symmetric and positive-definite. Hence $W'F_2NW = F_2\tilde WNW$ is again symmetric and positive-definite and, by Lemma 2 again, $\tilde WNW$ has its eigenvalues real and positive. Thus, taking $W^{-1}RW$, $\tilde WNW$ and $\tilde WT_0W$ instead of $R$, $N$ and $T_0$ respectively, we may suppose from the beginning that $T_0^{-1} = \begin{pmatrix} \alpha & 0\\ 0 & \delta\end{pmatrix}$ with $\alpha = -\tilde\alpha$, $\delta = -\tilde\delta$ in $(\tilde P)$, and that $N = \begin{pmatrix} x & \lambda\\ \tilde\lambda & y\end{pmatrix}$ has the properties mentioned in (86). Denoting by (R) the centre of $(\tilde P)$, we see that
$$x = [x_1E_4, \dots, x_hE_4] > 0, \quad y = [y_1E_4, \dots, y_hE_4] > 0 \text{ are in (R)}, \quad xy - \lambda\tilde\lambda > 0. \qquad (87)$$
(The last assertion in (87) is a consequence of the relation
$$N = \begin{pmatrix} 1 & 0\\ \tilde\lambda x^{-1} & 1\end{pmatrix}\begin{pmatrix} x & 0\\ 0 & y - \tilde\lambda x^{-1}\lambda\end{pmatrix}\begin{pmatrix} 1 & x^{-1}\lambda\\ 0 & 1\end{pmatrix}.)$$
Now $R = T_0^{-1}N = \begin{pmatrix} \alpha x & \alpha\lambda\\ \delta\tilde\lambda & \delta y\end{pmatrix}$. The condition $R^2 = -E$ may be written as
$$cx^2 - \alpha\lambda\delta\tilde\lambda = 1, \quad y\,\alpha\lambda\delta = cx\lambda, \quad dy^2 - \delta\tilde\lambda\alpha\lambda = 1, \quad x\,\delta\tilde\lambda\alpha = dy\tilde\lambda \qquad (88)$$
where $\alpha^2 = -\alpha\tilde\alpha = -c$, $\delta^2 = -\delta\tilde\delta = -d$, with $c = [c_1E_4, \dots, c_hE_4] > 0$, $d = [d_1E_4, \dots, d_hE_4] > 0$ in (R). Writing $x = py$ with $p \in$ (R), we obtain from (88)
$$c(x^2 - p\lambda\tilde\lambda) = 1 = d\left(y^2 - \frac{\tilde\lambda\lambda}{p}\right),$$
leading to $p^2 = dc^{-1}$. Thus $p = xy^{-1}$ is the positive square root of $dc^{-1}$, i.e. $p_k = +\sqrt{d_kc_k^{-1}}$. Our problem on R-matrices now reduces to finding $x > 0$ in (R) and $\lambda$ in $(\tilde P)$ for which
$$\alpha\lambda\delta = cp\lambda \qquad (89)$$
$$cx^2 - cp\lambda\tilde\lambda = 1, \quad p^{-1}x^2 - \lambda\tilde\lambda > 0 \qquad (90)$$
As a particular solution of (89) we have $\lambda_0 = p\alpha - \delta$ (observe that $\lambda_0 \ne 0$, since we have arranged that $\delta = p\alpha$ holds for no $p$ in (R)). The most general solution of (89) is given by $\lambda = t\lambda_0$, where $t \in (\tilde P)$ and $t\alpha = \alpha t$; clearly $t = u + v\alpha$ with $u = [u_1E_4, \dots, u_hE_4]$, $v = [v_1E_4, \dots, v_hE_4]$ in (R). Now the first condition in (90) may be written as
$$cx^2 - cp\lambda_0\tilde\lambda_0(u^2 + cv^2) = 1 \qquad (90)'$$
Equation $(90)'$ defines a two-sheeted hyperboloid in the $x$, $u$, $v$-space; the $2h$ components of $u$ and $v$ are independent real parameters, while the $h$ components of $x$ are linearly independent of the components of $u$, $v$, although quadratically related to them. We finally arrive at the following parameterization for the R-matrix $R$, namely
$$R = \begin{pmatrix} \alpha py & \alpha(u + v\alpha)\lambda_0\\ \delta\tilde\lambda_0(u - v\alpha) & \delta y\end{pmatrix} \qquad (91)$$
where $cp^2y^2 - cp\lambda_0\tilde\lambda_0(u^2 + cv^2) = 1$.
Let $\omega_1, \dots, \omega_h$ be a basis of $R$ over Q and let $\Omega = (\omega_k^{(l)})$, $1 \le k, l \le h$. For $(\tilde P)$, we took the rational representation $(\Omega \otimes E_4)[D^{(1)}, \dots, D^{(h)}](\Omega \otimes E_4)^{-1}$. Let, under this representation, $\alpha \to (\Omega \otimes E_4)[A^{(1)}, \dots, A^{(h)}](\Omega \otimes E_4)^{-1}$ and $\delta \to (\Omega \otimes E_4)[B^{(1)}, \dots, B^{(h)}](\Omega \otimes E_4)^{-1}$. It is trivial to verify that $R = (\Omega \otimes E_8)[R_1, \dots, R_h](\Omega \otimes E_8)^{-1}$, where
$$R_k = \begin{pmatrix} p_ky_kA^{(k)} & A^{(k)}(u_kE_4 + v_kA^{(k)})(p_kA^{(k)} - B^{(k)})\\ B^{(k)}(p_kA^{(k)} - B^{(k)})(v_kA^{(k)} - u_kE_4) & y_kB^{(k)}\end{pmatrix}$$
for $1 \le k \le h$. Let now $M_0$ be any $8h$-rowed rational matrix commuting with all R-matrices in (F). Then $M = (\Omega \otimes E_8)^{-1}M_0(\Omega \otimes E_8)$ has to commute with $[R_1, \dots, R_h]$ and, moreover, $(\Omega \otimes E_8)M(\Omega \otimes E_8)^{-1}$ has to be rational. From the mutual independence of the parameters $u_k$, $v_k$, $y_k$ and $u_l$, $v_l$, $y_l$ (for $k \ne l$), it is clear that $M = [M_1, \dots, M_h]$ and, further, by our Lemma, $M_1$ has elements in $R$, while $M_k = M_1^{(k)}$ for $1 \le k \le h$. In addition, $M_k$ commutes with $R_k$. We proceed to determine the structure of $M_1$. For the sake of brevity in notation, let us for the present agree to understand by $\alpha$, $\delta$ the corresponding matrices $A^{(1)}$, $B^{(1)}$, and further let us omit the subscripts in $p_1$, $y_1$, $u_1$, $v_1$ and in the block corresponding to $\lambda_0$. Then we see that $M_1$ has to commute with the matrix
$$\begin{pmatrix} \alpha py & \alpha(u + v\alpha)\lambda_0\\ \delta\lambda_0(v\alpha - u) & \delta y\end{pmatrix}$$
for all admissible $y$, $u$, $v$. Equivalently, $M_1$ has to commute with
$$P = \begin{pmatrix} p\alpha & 0\\ 0 & \delta\end{pmatrix}, \quad Q = \begin{pmatrix} 0 & \alpha\lambda_0\\ -\delta\lambda_0 & 0\end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & \alpha^2\lambda_0\\ \delta\lambda_0\alpha & 0\end{pmatrix} = p^{-1}PQ.$$
The last matrix is the product of the first two up to a scalar factor. Hence it suffices to require $M_1$ to commute with the two matrices $P$ and $Q$. We have now to distinguish between three cases.
(i) $p_k \notin R^{(k)}$ for some $k$ (say $k = 1$). Since 1, $p$ are then linearly independent over $R$, it is clear that $M_1$ has, of necessity, to commute with $\begin{pmatrix} \alpha & 0\\ 0 & 0\end{pmatrix}$, $\begin{pmatrix} 0 & 0\\ 0 & \delta\end{pmatrix}$ and $\begin{pmatrix} 0 & \alpha\lambda_0\\ -\delta\lambda_0 & 0\end{pmatrix}$. It follows immediately that $M_1 = \begin{pmatrix} \rho & 0\\ 0 & \sigma\end{pmatrix}$, where $\rho$ commutes with $\alpha$ and $\delta$ and hence with all elements of $(\tilde P)$; thus $\rho$ is in (P). Since $M_k = M_1^{(k)}$ ($1 \le k \le h$), we see that the rank over Q of the algebra of all rational matrices $M_0$ commuting with all R-matrices in (F) is, in this case, $4h$, which is the same as the rank of (M) over Q. Thus, in this case, there exist R-matrices admitting (M) as the exact commutator-algebra.
(ii) $p \notin$ (R), but $p_k = |\rho^{(k)}|$ ($1 \le k \le h$) for some $\rho \in R$. As in case (i), we know that $M_1$ has to commute with $\begin{pmatrix} p\alpha & 0\\ 0 & \delta\end{pmatrix}$ and $\begin{pmatrix} 0 & \alpha\lambda_0\\ -\delta\lambda_0 & 0\end{pmatrix}$. Further, for at least one $k$ ($1 \le k \le h$), $p_k = \rho^{(k)}$ and, for some $l$ ($1 \le l \le h$), $p_l = -\rho^{(l)}$. Thus, from the fact that $M_kR_k = R_kM_k$ and $M_k = M_1^{(k)}$, we see that $M_1$ has to commute with
$$\begin{pmatrix} \rho\alpha & 0\\ 0 & \delta\end{pmatrix}, \quad \begin{pmatrix} -\rho\alpha & 0\\ 0 & \delta\end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & \alpha\lambda_0\\ -\delta\lambda_0 & 0\end{pmatrix}.$$
Thus, again, $M_1$ commutes with $\begin{pmatrix} \alpha & 0\\ 0 & 0\end{pmatrix}$ and $\begin{pmatrix} 0 & 0\\ 0 & \delta\end{pmatrix}$, and therefore $M_1$ has to be of the same form as in case (i) above. We conclude, as above, that there exist R-matrices admitting (M) as the exact algebra of commutators.
(iii) $p \in$ (R). In this case $M_1$ commutes with $P = \begin{pmatrix} p\alpha & 0\\ 0 & \delta\end{pmatrix}$ and $Q = \begin{pmatrix} 0 & \alpha\lambda_0\\ -\delta\lambda_0 & 0\end{pmatrix}$, as before. But since $P^2 = -dE_8$, $Q^2 = cp\lambda_0\tilde\lambda_0E_8$ and $QP = -PQ$, we see that $E_8$, $P$, $Q$ generate an abstract quaternion algebra over $R$, and this 8-rowed representation contains its irreducible representation over $R$ exactly twice. Since $M_1$ commutes with $P$, $Q$, it follows that the rank over R of the algebra formed by the matrices $M_1$ is 16. Thus, in this case, the rank over Q of the algebra of rational matrices commuting with all the R-matrices in (F) is $16h$, which is greater than $4h$, the rank of (M) over Q. In other words, there do not exist R-matrices in (F) with (M) as the exact algebra of commutators, in this case.
For $\xi \in (\tilde P)$, define the reduced norm $N_R(\xi)$ of $\xi$ over $R$ by $N_R(\xi) = \xi\tilde\xi$ (which certainly belongs to $R$) and, for $T_0 = \begin{pmatrix} \alpha_1 & 0\\ 0 & \delta_1\end{pmatrix}$, define the reduced norm $N_R(T_0)$ of $T_0$ by $N_R(T_0) = N_R(\alpha_1)N_R(\delta_1)$. It is then clear that $N_R\begin{pmatrix} \alpha & 0\\ 0 & \delta\end{pmatrix}$ and $N_R(T_0)$ are the same up to the square of a totally positive number in $R$. We conclude thus that, in the case $V = P$, $q = 2$, there exist R-matrices with (M) as exact commutator-algebra except when
$$N_R(T_0) = \rho^2 \quad\text{for some } \rho > 0 \text{ in } R.$$
The next case for discussion is when $V$ is a cyclic algebra of type (iv) with $qs = 1$, i.e. $q = s = 1$. Then $V$ is the same as its centre $R$ ($= Z(\sqrt{a})$), which is totally complex of degree 2 over a totally real subfield $Z$ of degree $h/2$ over Q. Let $\omega_1, \dots, \omega_h$ be a basis of $R$ over Q and $\Omega = (\omega_k^{(l)})$ ($1 \le k, l \le h$). Then we have
$$T_0 = \Omega[\tau^{(1)}, \dots, \tau^{(h)}]\Omega^{-1} \text{ with } \bar\tau^{(k)} = -\tau^{(k)}, \quad N = \Omega[p_1, \dots, p_h]\Omega^{-1} > 0, \quad R = \Omega[i_1, \dots, i_h]\Omega^{-1},$$
and necessarily $i_k = \pm\sqrt{-1}$, in view of the fact that $R^2 = -E_h$. Thus $\tau^{(k)}$ and $i_k = p_k/\tau^{(k)}$ are purely imaginary and lie in opposite half-planes.
Let $R$ be any R-matrix in (F) (which is the same as (R)) and let (L) denote the algebra of its rational commutators (of course, by construction, (L) contains (R)). We know (L) is semi-simple; but since (L) is an algebra of $h$-rowed rational matrices containing an irreducible representation of $R$ (of degree $h$ over Q), we see, by considering the characteristic polynomial of a generator of (R) over Q, that (L) is necessarily simple. Let then (L) be the total matrix-algebra of order $l$ over a division algebra $V_1$, and let $V_1$ have centre $g$, an algebraic number field of degree $g$ over Q. By considering the representation of a generator of $R$ over Q with respect to a splitting field of $V_1$, we see that $V_1 = g$. Now the degree of a maximal commutative system in (L) is necessarily $gl$, and it is easy to deduce that $gl = h$ and $(g) \subset (R)$. Let $g^{(1)}(= g), g^{(2)}, \dots, g^{(g)}$ be the conjugates of $g$ over Q. Let $R^{(1)}(= R), R^{(2)}, \dots, R^{(l)}$ be the conjugates of $R$ over $g^{(1)}$, $R^{(l+1)}, \dots, R^{(2l)}$ the conjugates of $R^{(l+1)}$ over $g^{(2)}$, and so on. Taking a representation over $g^{(1)}, \dots, g^{(g)}$, let $T_0 = [T^{(1)}, \dots, T^{(g)}]$ and $R = [R_1, \dots, R_g]$. Since $(g)$ lies in the complete rational commutator-algebra of $R$, it follows that $R_k = \pm iE_l$ (for $1 \le k \le g$). Thus $i_{(k-1)l+1}, \dots, i_{kl}$ all have the same sign (for $1 \le k \le g$). If (L) $=$ (R), then we must have necessarily $l = 1$ and $g = h$. The criterion for $R$ to have (R) as exact commutator-algebra is then clearly that $l$ should not be greater than 1. We may reformulate this condition as follows, namely, that there should exist no proper subfield $g$ of $R$ such that the conjugates of $\tau$ with respect to each conjugate $g^{(k)}$ of $g$ all lie in the same complex half-plane (lower or upper); in other words, $\sqrt{a}\,T_0$ should not be totally definite over any proper subfield $g$ of $R$.
We proceed to discuss the case when $V = R$ is of type (iv) but $q = 2$. Then $R = Z(\sqrt{a})$ with $Z$ totally real and $a$ totally negative. For $\xi = x + y\sqrt{a}$ in $R$ with $x, y \in Z$, we always take the 2-rowed representation $\begin{pmatrix} x & y\\ ay & x\end{pmatrix}$ over $Z$ and denote it by $\xi$ again. For $\xi = \begin{pmatrix} x & y\\ ay & x\end{pmatrix} \in R$, $\bar\xi = \begin{pmatrix} x & -y\\ -ay & x\end{pmatrix}$. In particular, to $\sqrt{a}$ corresponds $\iota = \begin{pmatrix} 0 & 1\\ a & 0\end{pmatrix}$; any $\xi = x + y\sqrt{a} \in R$ is then equal to $xE_2 + y\iota$. If $Z^{(k)}$ is a conjugate of $Z$ over Q, then, corresponding to $\xi = \begin{pmatrix} x & y\\ ay & x\end{pmatrix}$ in $R$, we define $\xi^{(k)}$ by $\begin{pmatrix} x^{(k)} & y^{(k)}\\ (ay)^{(k)} & x^{(k)}\end{pmatrix}$.

Let $\omega_1, \dots, \omega_g$ be a basis of $Z$ over Q and $\Omega = (\omega_k^{(l)})$ ($1 \le k, l \le g$). For elements $T$ of $F$, we have first a representation $T = \begin{pmatrix} \alpha & \beta\\ \gamma & \delta\end{pmatrix}$ over $Z$, where $\alpha$, $\beta$, $\gamma$, $\delta$ are in $R$, and a rational representation for $T$ is given by $(\Omega \otimes E_4)[T^{(1)}, \dots, T^{(g)}](\Omega \otimes E_4)^{-1}$, where $T^{(k)} = \begin{pmatrix} \alpha^{(k)} & \beta^{(k)}\\ \gamma^{(k)} & \delta^{(k)}\end{pmatrix}$. We shall sometimes use, in the sequel, for the elements $T$ of $F$, the representation (F) given by $T \to [T^{(1)}, \dots, T^{(g)}]$ over $Z$ and its conjugates as mentioned above; we then extend this representation linearly to the linear closure of (F). When there is no risk of confusion, we shall denote $T \in$ (F) merely by $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta\end{pmatrix}$, as above, without referring to all the $g$ components every time.
Let then T₀ = −T̃₀ be a nonsingular element of (F). By the same arguments as in the case V = P, q = 2, we can suppose that T₀⁻¹ = [ λ  0; 0  μ ] with λ̄ = −λ, μ̄ = −μ in R. Let N = [N₁, …, N_g] with N_k = [ x_k  z_k; z̄_k  y_k ] be in (F̃), satisfying Ñ = N and having all eigenvalues real and positive. It follows that x_k, y_k are positive real scalar multiples of E₂, while z_k = c_k E₂ + d_k ι with c_k, d_k real, and further x_k y_k − z_k z̄_k > 0.
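The positivity condition on the blocks N_k can be illustrated numerically: identifying z_k with a complex scalar, the 2-rowed block [ x  z; z̄  y ] has positive eigenvalues precisely when x > 0 and xy − zz̄ > 0. A minimal sketch with arbitrary sample values:

```python
import math

def herm_eigs(x, y, z):
    # eigenvalues of the 2-rowed hermitian block [x z; conj(z) y]
    tr, det = x + y, x * y - abs(z) ** 2
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

lo, hi = herm_eigs(2.0, 1.5, 0.3 + 0.4j)   # here x*y - z*conj(z) = 2.75 > 0
assert lo > 0 and hi > 0                    # both eigenvalues positive

lo2, _ = herm_eigs(0.2, 0.5, 1 + 0j)        # here x*y - z*conj(z) = -0.9 < 0
assert lo2 < 0                              # one eigenvalue negative
```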
Let now R = T₀⁻¹N be an R-matrix in (F̃). Then

R = (Ω × E₄)[R₁, …, R_g](Ω × E₄)⁻¹   (92)

with R_k = [ α_k  β_k; γ_k  δ_k ], where α_k = x_k λ^(k), β_k = λ^(k) z_k, γ_k = μ^(k) z̄_k, δ_k = y_k μ^(k). Now R² = −E gives, for 1 ≤ k ≤ g,

α_k² + β_k γ_k = −1,  δ_k² + β_k γ_k = −1,  (α_k + δ_k)β_k = 0 = (α_k + δ_k)γ_k   (93)
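The relations (93) can be tested numerically. In the sketch below, the values l = 2, m = −1/2 and z = 0.3 + 0.4i are arbitrary samples; λ = il and μ = im are taken purely imaginary with λ/μ < 0 (the case α_k + δ_k = 0), x is solved for from the resulting quadratic relation, and the 2-rowed block is checked to square to −E:

```python
l, m = 2.0, -0.5                  # sample values: lambda = i*l, mu = i*m, l/m < 0
lam, mu = 1j * l, 1j * m
z = 0.3 + 0.4j

# solve (x*lam)^2 + lam*mu*z*conj(z) = -1 for x > 0, as in relation (96)
x = ((1 + (lam * mu).real * abs(z) ** 2) / l ** 2) ** 0.5
R = [[x * lam, lam * z], [mu * z.conjugate(), -x * lam]]

def mm(A, B):
    # product of two 2-rowed complex matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = mm(R, R)
assert all(abs(S[i][j] - (-1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))      # R^2 = -E
```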
Now there are two possibilities: either a) α_k + δ_k ≠ 0, in which case we have necessarily β_k = 0 = γ_k, or b) α_k + δ_k = 0. In case a), we see that R_k = [ α_k  0; 0  δ_k ], where, from (93), α_k = ±(1/√a^(k)) ι^(k) and δ_k = ±(1/√a^(k)) ι^(k). Since α_k + δ_k ≠ 0, it follows that α_k = δ_k = ±(1/√a^(k)) ι^(k), and thus

R_k = ±(1/√a^(k)) [ ι^(k)  0; 0  ι^(k) ]   (94)

Now α_k = δ_k is equivalent to the fact that λ^(k)/μ^(k) = y_k/x_k > 0, which, in turn, is equivalent to the fact that a_k b_k = 0, in our former notation. Let us now consider case b), when α_k + δ_k = 0, or equivalently λ^(k)/μ^(k) = −y_k/x_k < 0.
Thus, in this case, a_k b_k = 1. Now δ_k = −α_k, and we have

R_k = [ x_k λ^(k)   λ^(k) z_k; μ^(k) z̄_k   −x_k λ^(k) ]   (95)

From (93), we obtain (x_k λ^(k))² + λ^(k) μ^(k) z_k z̄_k = −1, i.e.

(λ^(k))² x_k² + λ^(k) μ^(k) z_k z̄_k = −1   (96)

Since λ̄^(k) = −λ^(k) and μ̄^(k) = −μ^(k), it is clear that −(λ^(k))² > 0, while, in case b), λ^(k) μ^(k) > 0. Thus equation (96) defines a two-sheeted hyperboloid in the x_k, z_k-space. As a consequence of (96), we also have −(λ^(k)/μ^(k)) x_k² − z_k z̄_k = 1/(λ^(k) μ^(k)) > 0, which means x_k y_k − z_k z̄_k > 0.

We may rule out the possibility that case a) occurs for all the g components, since then a_k b_k = 0 for all k, and this case has been discussed already.
So then let us assume that at least one component of R, say R₁ (without loss of generality), is of the form (95). Let M be a rational matrix commuting with all R-matrices in (F̃). Then, using the form (92) for R-matrices, M₁ = (Ω × E₄)⁻¹M(Ω × E₄) commutes with [R₁, …, R_g]. We split up M₁ as (M_kl) (1 ≤ k, l ≤ g) with 4-rowed square matrices M_kl. Now, in R₁, the three real parameters in x₁, z₁ are linearly independent, and therefore M₂₁ = 0 = … = M_g1. We are now in a position to apply Lemma 3 and deduce that all the elements of M₁₁ are in Z, while M_kk = M₁₁^(k) and M_kl = 0 for k ≠ l. Further, M_kk R_k = R_k M_kk.
Let us now suppose that not all of the components R_k are of the form (95), i.e. neither a_k b_k = 0 for all k nor a_k b_k = 1 for all k. Further, without loss of generality, let R₁ be of the form (95) while R₂ is of the form (94). Then, writing M₂₂ = [ P  Q; S  T ] with 2-rowed square matrices P, Q, S, T having elements in Z^(2), we obtain that each one of them commutes with ι^(2) = [ 0  1; −a^(2)  0 ], and therefore P, Q, S, T represent elements in Z^(2)(√−a^(2)). Since M_kk = M₁₁^(k), we know that M₁₁, M₂₂, …, M_gg are conjugate over Z, and hence the elements of M_kk are in Z^(k)(√−a^(k)); in particular, M₁₁ has elements in R.
From M₁₁R₁ = R₁M₁₁ for all R₁ of the form (95), it follows that M₁₁ has to commute with [ λ  0; 0  −λ ], [ 0  λ; μ  0 ] and [ 0  λι; −μι  0 ] (dropping the suffixes and superscripts). Let us now set μλ⁻¹ = p; p lies in Z, in fact. The matrix M₁₁, which already commutes with [ λ  0; 0  −λ ], has thus also to commute with

A = [ λ  0; 0  −λ ],  B = [ 0  λ; pλ  0 ]  and  C = [ 0  λ²; −pλ²  0 ] = AB = −BA   (97)

Thus M₁₁ has to commute with [ 0  λ; 0  0 ], [ 0  0; pλ  0 ] and [ λ  0; 0  −λ ]. Therefore M₁₁ = [ ρ  0; 0  ρ ] with ρ ∈ R. Hence the rank over Q of the algebra of rational matrices commuting with all R-matrices in (F̃) is 2g = h, which is exactly the rank of (M) over Q. We conclude, as before, that in this case there exist R-matrices with (M) as exact commutator-algebra.
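The relations in (97) can be verified directly in the 2-rowed representation. The sketch below assumes the normalization λ = ι (scalar factor 1), with a = 5 and p = 3 as arbitrary sample values, so that A and B become 4-rowed matrices satisfying A² = −aE, B² = −paE and AB = −BA:

```python
from fractions import Fraction as F

a, p = F(5), F(3)
lam = [[F(0), F(1)], [-a, F(0)]]        # lambda taken as iota in its 2-rowed representation

def mm(X, Y):
    # product of square matrices of equal size
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def blk(tl, tr, bl, br):                # 4-rowed matrix from four 2-rowed blocks
    return [tl[0] + tr[0], tl[1] + tr[1], bl[0] + br[0], bl[1] + br[1]]

Z2   = [[F(0), F(0)], [F(0), F(0)]]
neg  = lambda X: [[-e for e in row] for row in X]
smul = lambda c, X: [[c * e for e in row] for row in X]
scal = lambda c: [[c if i == j else F(0) for j in range(4)] for i in range(4)]

A = blk(lam, Z2, Z2, neg(lam))          # A = [lambda 0; 0 -lambda]
B = blk(Z2, lam, smul(p, lam), Z2)      # B = [0 lambda; p*lambda 0]

assert mm(A, A) == scal(-a)             # A^2 = -aE
assert mm(B, B) == scal(-p * a)         # B^2 = -paE
assert mm(A, B) == neg(mm(B, A))        # C = AB = -BA
```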
On the other hand, let all R_k be of the form (95). Then M₁₁ is an arbitrary 4-rowed square matrix with elements in Z commuting with A, B and C as defined in (97). But from (97), and from A² = −aE, B² = −paE, we see that E, A, B generate a quaternion algebra over Z, and M₁₁ has then to lie precisely in the commutator-algebra of this quaternion algebra. Hence the rank of the algebra of all rational matrices commuting with all the R-matrices in (F̃) is precisely 4g = 2h, which is greater than the rank of (M) over Q. Thus, in the case when a_k b_k = 1 for all k, there exists no R-matrix with (M) as its exact commutator-algebra. Now a_k b_k = 1 for all k or a_k b_k = 0 for all k is respectively equivalent to the fact that T₀ is totally indefinite or totally definite over R. Or, putting it in other words, except when |T₀⁻¹| = λμ (by definition) is either totally positive or totally negative in Z, there exist R-matrices with (M) as the exact algebra of commutators.
We now deal with the last of the exceptional cases, namely, when V is a cyclic algebra of type (iv) and s = 2, q = 1. Thus V is a quaternion algebra with centre R, which is obtained by adjoining ι = √−a to a totally real field Z of degree g over Q, where a > 0 is in Z. As before, we can find a totally real field Z₀ = Z(θ), with θ = √d and d > 0 in Z, such that 𝒵 = Z₀(ι) serves as a splitting field for V. There are two automorphisms of 𝒵 which are the identity on Z and commute with each other, namely, for α ∈ 𝒵,

α = x + yθ → α′ = x − yθ  (x, y ∈ R),
α = p + qι → ᾱ = p − qι  (p, q ∈ Z₀).

The algebra V is generated over 𝒵 by an element J which satisfies J² = b ∈ R and Jα = α′J. Further, there exists c ∈ Z₀ such that cc′ = bb̄. For α ∈ Z₀, the mapping α → α′ is an automorphism of Z₀ over Z.
For ξ = α + βJ ∈ V with α, β ∈ 𝒵, we have over 𝒵 the representation D = [ α  β; bβ′  α′ ]. Further,

D′ = [ α′  β′; bβ  α ] = FDF⁻¹,  where F = [ 0  1; b  0 ].

In terms of D, the positive involution in V is expressed as

[ α  β; bβ′  α′ ] → D̂ = F₁⁻¹ D̄ᵗ F₁ = [ ᾱ  (c′/c)β̄′; (bc/c′)β̄  ᾱ′ ],  F₁ = [c, c′/b] > 0   (98)

(here D̄ᵗ denotes the transpose of the matrix obtained by applying the bar to the elements of D).
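The matrix identity D′ = FDF⁻¹ above is purely formal and can be checked with arbitrary commuting scalars in place of α, α′, β, β′. A minimal sketch with sample rational values (b = 7 and the four scalars are arbitrary):

```python
from fractions import Fraction as F

def mm(A, B):
    # product of two 2-rowed matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

b = F(7)
al, alp, be, bep = F(2), F(-3), F(5, 2), F(1, 3)   # samples for alpha, alpha', beta, beta'

Fm   = [[F(0), F(1)], [b, F(0)]]
Finv = [[F(0), F(1) / b], [F(1), F(0)]]
D    = [[al, be], [b * bep, alp]]

assert mm(Fm, mm(D, Finv)) == [[alp, bep], [b * be, al]]   # F D F^{-1} = D'
```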
We obtain for V a representation D₀ = (Ω₁ × E₂)[D, D′](Ω₁ × E₂)⁻¹, where Ω₁ = [ 1  1; θ  −θ ]. It is clear that D₀ = [ α  β; bβ′  α′ ], where now α, β, bβ′, α′ stand for their 2-rowed representations over Z₀ with respect to the basis 1, θ. From this, we pass to a representation of V over Z given by D̃ = K₁[D₀, D̄₀]K₁⁻¹, where K₁ = Ω₂ × E₄, Ω₂ = [ 1  1; ι  −ι ], and then to the rational representation

(Ω₃ × E₈)[D̃^(1), …, D̃^(g)](Ω₃ × E₈)⁻¹   (99)

where Ω₃ = (ω_k^(l)), ω₁, …, ω_g being a basis of Z over Q.
For the abstract algebra, we may start with the regular representation of its opposite algebra and assume that, for the elements of the commutator-algebra (F) of (V), we already have the rational representation of the form (99). (Let us remark that this arrangement is purely for the sake of convenience in working. Even if we had started with the regular representation (V) of the abstract algebra V, the positive involution in (V) would correspond, in (F), to the involution T → T̂ = F̄₁⁻¹ T̄ᵗ F̄₁. This is different from (98) only in as much as F₁ has to be replaced by F̄₁; but observe that this involution is again positive.)
Let T₀ = −T̂₀ be a nonsingular element of (F), and let T₀ = (Ω₃ × E₈)[T₀^(1), …, T₀^(g)](Ω₃ × E₈)⁻¹, where the T₀^(k) are defined as follows. Let T₀ have the representation [ α  β; bβ′  α′ ] over 𝒵 and let 𝒵^(k) = Z^(k)(√d^(k), √−a^(k)). Define α^(k), β^(k), (bβ′)^(k), (α′)^(k) to be the images of α, β, bβ′, α′ respectively under the isomorphism of 𝒵 onto 𝒵^(k) taking Z onto Z^(k). Then

T₀^(k) = [ α^(k)  β^(k); (bβ′)^(k)  (α′)^(k) ]   (100)

where, for the elements of T₀^(k), we have taken their regular representation over 𝒵^(k). From T̂₀ = −T₀, we obtain ᾱ = −α and (c′/c)β̄′ = −β. Let

N = (Ω₃ × E₈)[N₁, …, N_g](Ω₃ × E₈)⁻¹

be in (F̃), having the properties mentioned in (86), and, analogous to
(100), let N_k = [ ξ_k  η_k; b^(k)η′_k  ξ′_k ], where ξ_k, η_k are in the linear closure of 𝒵^(k) and hence commute with the elements of 𝒵^(k). Then ξ̄_k = ξ_k and (c′/c)η̄′_k = η_k. Further, N_k has all its eigenvalues real and positive, i.e. ξ_k ξ′_k − b^(k) η_k η′_k > 0. Now R = T₀⁻¹N is an R-matrix in (F̃); let again

R = (Ω₃ × E₈)[R₁, …, R_g](Ω₃ × E₈)⁻¹

with R_k = [ λ_k  μ_k; b^(k)μ′_k  λ′_k ]. From R² = −E, we obtain

λ_k² + b^(k)μ_kμ′_k = −1 = λ′_k² + b^(k)μ_kμ′_k,  (λ_k + λ′_k)μ_k = 0 = (λ_k + λ′_k)b^(k)μ′_k   (101)
Let us denote |T₀| = αα′ − bββ′ by ρ; then ρ ∈ Z. We know that ρ^(k) is negative or positive according as a_k b_k = 0 or 1 respectively, in our former notation. On the other hand, taking determinants, ρ^(k)|R_k| = |N_k|, which, in view of (86), is positive. Further, since R² = −E, we have |R_k| = ±1. Now, in (101), one of two possibilities arises: either a) λ_k + λ′_k ≠ 0, in which case μ_k = 0, μ′_k = 0, or b) λ′_k = −λ_k. In case a), using (101), we see that λ_k = λ′_k = ±(1/√a^(k)) ι^(k). Since then |R_k| = −1, we see that ρ^(k) < 0, and thus case a) corresponds to the situation when a_k b_k = 0; then

R_k = ±(1/√a^(k)) [ ι^(k)  0; 0  ι^(k) ]   (102)
When case b) occurs, λ′_k = −λ_k, and therefore |R_k| = −λ_k² − b^(k)μ_kμ′_k = 1, in view of (101). Thus case b) corresponds precisely to the situation ρ^(k) > 0, or equivalently a_k b_k = 1. Thus, in case b), R_k = [ λ_k  μ_k; b^(k)μ′_k  −λ_k ], and R_k contains a priori eight free real parameters. From N_k = T₀^(k)R_k, we obtain, dropping the inconvenient suffix k and the superscript k everywhere without risk of confusion, that

ξ = αλ + bβμ′,  η = αμ − βλ,  ξ′ = bβ′μ − α′λ,  bη′ = b(β′λ + α′μ′)   (103)
Since ξ̄ = ξ, ᾱ = −α, β̄′ = −(c/c′)β and β ≠ 0, we can define r = r̄, s = s̄ by μ = rβ, μ′ = sβ′. Then, from (103), we have

η = β(rα − λ),  bη′ = bβ′(λ + sα′)   (104)

But from N̂ = N we have (c′/c)η̄′ = η and again, in view of (103),

(c′/c)β̄′(λ̄ − sα′) = β(rα − λ)   (105)

Since β̄′ = −(c/c′)β, (105) reduces to −(λ̄ − sα′) = rα − λ, i.e.

½(λ − λ̄) = ½(rα − sα′)   (106)

While r, s and t = ½(λ + λ̄) are free, the imaginary part of λ is fixed by (106). We now set

u = ½(r − s),  v = ½(r + s),  q = bββ′/(αα′)

Then obviously q ∈ Z, and furthermore

q − 1 < 0   (107)

From (106), we have λ = t + quθ. From (104), we then get

μ = q(t + θ(v + u))   (108)

and similarly

bμ′ = t + θ(v − u)   (109)

Thus t, u, v are real parameters which, in view of (108) and (109), are subject to the conditions

t̄ = t,  ū = −u,  v̄ = v   (110)

The relation λ² + bμμ′ = −1 can now be rewritten in terms of u, v, t: using (108) and (109), we obtain

(t + quθ)² + q(t + θ(v + u))(t + θ(v − u)) = −1,

i.e.

(1 − q)t² − aq(1 − q)u² + aqv² = 1   (111)
In view of (107), exactly one of aq and −aq(1 − q) is negative, while the coefficient of t² is positive. Further, for t, u, v satisfying (111), we have r ≠ 0; for, if r = 0, the left-hand side of (111) reduces to an expression which never takes the value 1. We thus see that, in the t, u, v-space, equation (111) defines a two-sheeted hyperboloid.
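The sign pattern of the coefficients in (111) can be checked mechanically; the sketch below uses a = 3 and two sample values of q with q − 1 < 0, all chosen arbitrarily:

```python
def coeffs(a, q):
    # coefficients of t^2, u^2, v^2 on the left side of (111)
    return (1 - q, -a * q * (1 - q), a * q)

for q in (0.5, -2.0):                  # two sample values with q - 1 < 0
    ct, cu, cv = coeffs(3.0, q)        # a = 3.0 is an arbitrary sample
    assert ct > 0                      # coefficient of t^2 is positive
    assert (cu < 0) != (cv < 0)        # exactly one of the other two is negative
```

This is exactly the signature (+, +, −) or (+, −, +) of a two-sheeted hyperboloid.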
Thus, in the case when a_k b_k = 1, using (108) and (109), we have for R_k the parametrization

R_k = V_k [ t_k + q u_k θ^(k)   q(t_k + θ^(k)(v_k + u_k)); t_k + θ^(k)(v_k − u_k)   −(t_k + q u_k θ^(k)) ] V_k⁻¹   (112)

where V_k = [1, β̄^(k)].
We proceed to discuss the algebra of commutators of the R-matrices R. As before, we rule out the occurrence of case a) for all the g components of R. Let then at least one component of R, say R₁, be of the form (112). If M is a rational matrix commuting with all R-matrices in (F̃), then M₁ = (Ω₃ × E₈)⁻¹M(Ω₃ × E₈) commutes with [R₁, …, R_g], and, by the same arguments as on p. 99, M₁ = [M₁₁^(1), …, M₁₁^(g)], where M₁₁ = M₁₁^(1) is an 8-rowed square matrix with elements in Z commuting with

R₁ = [ V₁ [ t₁ + qu₁θ   q(t₁ + θ(v₁ + u₁)); t₁ + θ(v₁ − u₁)   −(t₁ + qu₁θ) ] V₁⁻¹ ,  V̄₁ [ t₁ − qu₁θ   q(t₁ + θ(v₁ − u₁)); t₁ + θ(v₁ + u₁)   −(t₁ − qu₁θ) ] V̄₁⁻¹ ]   (113)

For the elements of the matrices in (113), of which R₁ is a direct sum, we have taken the 2-rowed representation over the linear closure of Z₀, so that R₁ is an 8-rowed square matrix. Taking into account the relations t̄₁ = t₁, ū₁ = −u₁ and v̄₁ = v₁, and replacing t₁ by √d t₁ and u₁ by √d u₁, we see that M₁₁ has to commute with [V₁, V̄₁]A[V₁, V̄₁]⁻¹, [V₁, V̄₁]B[V₁, V̄₁]⁻¹ and [V₁, V̄₁]C[V₁, V̄₁]⁻¹, where

A = [ √d [ 1  q; 1  −1 ] ,  −√d [ 1  q; 1  −1 ] ],
B = [ √d ι [ q  −q; −1  −q ] ,  √d ι [ q  −q; −1  −q ] ],
C = [ ι [ 0  −q; 1  0 ] ,  −ι [ 0  −q; 1  0 ] ]   (114)

For the elements of the matrices in (114), we have taken the 2-rowed representation over Z₀.
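The block relations behind (114) can be checked directly. The following is a minimal sketch over the rationals, with q = −3 as an arbitrary sample value; MA and MC are the 2-rowed matrices occurring in A and C, stripped of the factors √d and ι:

```python
from fractions import Fraction as F

q = F(-3)                       # sample value; only q - 1 < 0 matters here
MA = [[F(1), q], [F(1), F(-1)]]
MC = [[F(0), -q], [F(1), F(0)]]

def mm(X, Y):
    # product of two 2-rowed matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert mm(MA, MA) == [[1 + q, 0], [0, 1 + q]]    # gives A^2 = d(1+q)E
assert mm(MC, MC) == [[-q, 0], [0, -q]]          # gives C^2 = aqE
AC = mm(MA, MC)
CA = mm(MC, MA)
assert AC == [[-x for x in row] for row in CA]   # anticommutation: AC = -CA
```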
Let us now suppose at least one of the components of R is of the
form (102). Then we can conclude as on p.99, M
11
has elements in R.
But now it is easy to verify that the matrices A, B, C dened by (114)
satisfy
A
2
= d(1 q)E, C
2
= aqE, AC = B = CA
and therefore generate a quaternion algebra over R. The matrices M
11
belong to the commutator algebra of over R and therefore constitute
an algebra of rank 8 over Z. We may then conclude that the algebra of
80 Chapter 1
all the rational matrices commuting with all the R-matrices in (F) is, in
this case, exactly 8g = 4h which is nothing but the rank of (M) over Q.
Finally, let us suppose that all the components of R are of the form
(112) i.e. a
k
b
k
= 1 for all k. Then M
11
is, as before, an 8-rowed square
matrix with elements in Z which commutes with the 8-rowed repre-
sentation of over Z. But this latter representation of contains the
irreducible representation of over Z exactly twice and therefore, it is
clear that the matrices M
11
generate an algebra of rank 16 over Z. It
is now immediate that the algebra of all rational matrices M commuting
with all the R-matrices in (F) is, in this case, equal to 16g = 8h which
is greater than the rank of (M) over Q.
Thus, in the case when V is of type (iv) and s = 2, q = 1, there exist 108
R-matrices with (M) as exact commutator-algebra unless T
0
is totally
- denite hermitian or totally indenite hermitian over R.
We shall say that T₀ is skew-symmetric totally definite or totally indefinite according as ιT₀ is totally definite hermitian or totally indefinite hermitian.

We have thus completely solved our problem on Riemann matrices, and we may summarize our results in the following theorem. (We remark that the matrix T₀ which appears in the statement of Theorem 7 is precisely the given non-singular matrix in (F) which is skew-symmetric for the involution in (F), and A = G_qT₀ is a principal matrix for our R-matrices.)
Theorem 7. With the notation of Theorem 5, there always exists an R-matrix with the given A as principal matrix and having (M) as the exact algebra of commutators, except when

a) V = R with a positive involution of the first kind, and q is odd;

b) V = P, q = 1;

c) V = P, q = 2, N_R(|T₀|) = λ² for λ > 0 in R;

d) V is of type (iv), q = s = 1, and there exists a proper subfield J of V = R over which √−a T₀ is totally definite;

e) V is of type (iv), s = 1, q = 2, and T₀ is skew-symmetric totally definite or totally indefinite over V = R; and

f) V is of type (iv), s = 2, q = 1, and T₀ is skew-symmetric totally definite or totally indefinite over the centre R.
Remarks. (1) In solving our problem on R-matrices, we have allowed for A the fullest possible generality; we emphasize that the transformations which we performed on (F) there, to reduce A to the simple form J, were merely to make the discussion easier and constituted no diminution of the generality of A.

(2) Suppose V is of type (iv), Z = Q and qs = 2. Then there cannot exist nonsingular T₀ = −T̃₀ in (F) which are neither skew-symmetric totally definite nor skew-symmetric totally indefinite over R (which is now an imaginary quadratic extension of Q), since there cannot exist in Q non-zero numbers which are neither positive nor negative!
8 Modular groups associated with Riemann matrices

In this concluding section, we shall make a close study of the space H which we associated on p. 56 with the given division algebra V. We shall see, for example, how far (M) determines H, and find all the principal matrices for a general R-matrix. Later, we shall define the general modular groups, which act on H as groups of transformations of H onto itself. The scope of these lectures prevents us from making a function-theoretic study of these modular groups analogous to some recent work of I. I. Pyatetskii-Shapiro ([13], [14]). We merely remark that the preparatory material for this study is contained in [21] and [22].
We may first briefly recall how H was defined. We had first a division algebra V of rank hs² over Q, with centre R of degree h over Q, and carrying a positive involution. Further, (M) was, upto equivalence over Q, a q-fold multiple of (V), the rational hs²-rowed representation of V. In the algebra (V), we had an involution D → D̃ = F⁻¹D′F, and the matrix G_q, defined by

G_q = F_q M₀ > 0   (115)

was a positive symmetric matrix, with M₀ in (M) such that M₀ = F_q⁻¹M₀′F_q. Further, T₀ was a given nonsingular element of (F) (the commutator-algebra of (M)) for which

T̃₀ = F_q⁻¹T₀′F_q = −T₀   (116)

The matrix A defined by

A = G_q T₀   (117)

was a nonsingular rational skew-symmetric matrix defining the Rosati involution M → M* = A⁻¹M′A in (M). Our problem was first to find R-matrices R in (F̃) for which

AR = S = S′ > 0   (118)

(We recall that the matrix A in (118) is a principal matrix for R.) Associated with each such R-matrix R, we had defined an n-rowed Riemann matrix P of the form (70), uniquely determined by R upto a left-sided complex non-singular factor. We denoted by H the set of P of the form (70) associated in this way. In the sequel, however, we shall denote by H the set of R-matrices in (F̃) themselves.
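As a minimal numerical illustration of (118) (a sketch assuming the trivial sample case n = 1, with A taken as the 2-rowed alternate matrix), one checks that R below satisfies R² = −E and that AR is symmetric and positive:

```python
def mm(X, Y):
    # product of two 2-rowed matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [-1, 0]]      # a rational nonsingular skew-symmetric matrix
R = [[0, -1], [1, 0]]      # an R-matrix: R^2 = -E

assert mm(R, R) == [[-1, 0], [0, -1]]
S = mm(A, R)
assert S == [[1, 0], [0, 1]]           # S = AR = S' > 0, as in (118)
```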
So H depends, a priori, on (M), on M₀ ∈ (M) given in (115), and on T₀ ∈ (F) given in (116). Given (M), we shall now see how far H is determined by (M). For our subsequent discussion, we shall exclude V from being of the type of the six exceptional cases mentioned in the statement of Theorem 7. Hence H will always contain an R-matrix having (M) as its exact commutator-algebra. Such an R-matrix shall be referred to as a generic R-matrix. We now prove

Proposition 14. Let H be the space of R-matrices associated with (M), M₀ and T₀ as above, and H₁ that associated with (M), M₁ and T₁ in a similar manner. Then H = H₁ if H ∩ H₁ contains a generic R-matrix.
Before proving the proposition, we remark that if R in H also lies in H₁, then both A = F_qM₀T₀ and A₁ = F_qM₁T₁ are principal matrices. The following proposition gives the form of all principal matrices for a generic R-matrix R ∈ H. It is not hard to extend it also to the case when the R-matrix is not necessarily irreducible.
Proposition 15. If A is a principal matrix for a generic R ∈ H, then any other principal matrix A₁ of R is of the form AM, where M is a positive element of (M); and conversely, A₁ = AM is a principal matrix for R, for every such M ∈ (M).

Proof. From (118), we obtain S = AR = −R′A, S₁ = A₁R = −R′A₁, and therefore A⁻¹A₁R = RA⁻¹A₁. But R being generic and A⁻¹A₁ being rational, we see that A⁻¹A₁ = M ∈ (M). In the first place, M* = A⁻¹M′A = A⁻¹A₁′A′⁻¹A = A⁻¹A₁ = M. Further, from S > 0 and from SM = ARM = AMR = S₁ > 0, we see, by Lemma 2, that the eigenvalues of M are real and positive. In other words, A₁ = AM for a positive element M of (M). Conversely, if M is a positive element of (M) (so that M = M*), then A₁ = AM = M′A = −A₁′, and further A₁R = AMR = ARM is symmetric and positive, by Lemma 2. We now give the
Proof of Proposition 14. Let R be a generic R-matrix in H ∩ H₁. Then, by Proposition 15, A₁ = AM for a positive element M in (M). If R₀ ∈ H, then AR₀ > 0. But now A₁R₀ = AMR₀ = AR₀M is again positive symmetric, using Lemma 2. Thus A₁ is a principal matrix for R₀, and so R₀ ∈ H₁. Thus H ⊂ H₁, and similarly H₁ ⊂ H, which proves our proposition.
In the set of T₀ of the form (116), we introduce an equivalence relation as follows: two such matrices T₀ are equivalent if they differ by a factor K ∈ (Z) which is totally positive. We denote by [T₀] the equivalence class of T₀.

Proposition 16. If H is the space of R-matrices associated with (M), M₀ and T₀ as above, then H depends essentially only on (M) and [T₀].
Proof. Let H₁ be the space of R-matrices associated with (M), M₁ and KT₀, where K is in (Z) and has positive eigenvalues. We shall show that H = H₁. Let R ∈ H. Then we know that F_qM₀T₀R is symmetric and positive. If we could show that

F_qM₁KT₀R = (F_qM₁KT₀R)′ > 0   (119)

it would follow that H ⊂ H₁; and then, taking K⁻¹ instead of K, the reverse inclusion would hold, leading to H = H₁. To prove (119), we first remark that from F_qM₀ = (F_qM₀)′ > 0 and F_qM₁ = (F_qM₁)′ = F_qM₀ · M₀⁻¹M₁ > 0, it follows, in view of Lemma 2, that M₀⁻¹M₁ ∈ (M) has positive eigenvalues. Hence the product M₀⁻¹M₁K has again positive eigenvalues (since the factors commute). Now (F_qM₁KT₀R)′ = (T₀R)′F_qM₁K = (T₀R)′F_qKM₁ = (T₀R)′F_qM₀ · M₀⁻¹M₁K = F_qM₀T₀R · M₀⁻¹M₁K = F_qM₁KT₀R. Further, since F_qM₁KT₀R is symmetric and F_qM₀T₀R > 0, it follows that F_qM₁KT₀R = F_qM₀T₀R · M₀⁻¹M₁K > 0, by Lemma 2.
Conversely, if (M), M₁, T₁ lead to the same H, then we claim that [T₁] = [T₀]. For, let R be a generic R-matrix in H. Then F_qM₀T₀ and F_qM₁T₁ are both principal matrices for R, and hence, by Proposition 15, M₀T₀ = M₁T₁M for a positive element M ∈ (M), which means M₀⁻¹M₁M = T₀T₁⁻¹. From Proposition 6, it follows that T₀T₁⁻¹ = M₀⁻¹M₁M = K lies in (R). From T̃₀ = −T₀, T̃₁ = −T₁, it follows that K ∈ (Z). Further, from F_qM₁ = M₁′F_q and F_qM₀ = M₀′F_q, it follows that (M₀⁻¹M₁)′F_qM₀ = F_qM₀ · M₀⁻¹M₁, i.e. M₀⁻¹M₁ is symmetric under the given positive involution in (M). Moreover, by the same arguments as above, M₀⁻¹M₁ has positive eigenvalues. Hence M₀⁻¹M₁ is a positive element in (M). Since M₀⁻¹M₁M = K is in the centre, it follows that the positive elements M and M₀⁻¹M₁ in (M) commute. Hence K = K*, and further K has all its eigenvalues positive. Thus T₀T₁⁻¹ ∈ (Z) and has all its eigenvalues positive, i.e. [T₀] = [T₁].
From Proposition 15, we know that if A is a principal matrix for a generic R in H, then any other principal matrix is A₁ = AM, where M is a positive element of (M). We shall investigate the cases when A⁻¹A₁ is always rE, where r > 0 in Q. This will indeed be true if the only elements M ∈ (M) for which M* = M and all the eigenvalues are real and positive are precisely those of the form rE, where r > 0 and r ∈ Q. A necessary condition for this is that Z = Q. But even if Z = Q, we know that, in the case when V is an indefinite quaternion algebra or a noncommutative cyclic algebra, there exist positive elements in (V) other than the positive elements in (Q). But in the case when V = R (with Z = Q) is of type (i) or (iv), or V = P with R = Z = Q, the only positive elements in (M) are of the form rE with r > 0 in Q. Therefore, in these cases, any two principal matrices for H differ at most by a positive rational scalar factor.
We now go back to our definition of a multiplier of a Riemann matrix P. We called an integral matrix M a multiplier of P if PM = KP for a complex nonsingular K; later, we relaxed the condition that M be integral and allowed M to be rational and not necessarily non-singular. We constructed, in § 6, Riemann matrices P with the given division algebra (M) as exact algebra of multipliers. The integral matrices M in this representation (M) form an order (U) in (M), and P admits all elements of (U) as (integral) multipliers. One could ask the more difficult question of constructing Riemann matrices P with (U) as the exact ring of multipliers. Now, when we speak of an integral multiplier of P, it is necessary to mention the specific representation (M). For, an integral matrix M in (M) will not, in general, go into an integral matrix in a Q-equivalent representation C⁻¹(M)C. But it is true that (U) will go over into an order in C⁻¹(M)C. If C is a unimodular matrix and C⁻¹(M)C = (M), then C⁻¹(U)C will again be equal to (U). In this connexion, it is then of interest to study the mappings R → U⁻¹RU for R ∈ H and unimodular U. This, as we shall presently see, leads us to the general modular groups associated with (M).
Proposition 17. Let U be a unimodular matrix such that the mapping R → U⁻¹RU is a mapping of H into itself, where H is a space of R-matrices associated with (M) as above. Then the mapping M → U⁻¹MU is an automorphism of (M). Further, the mapping R → U⁻¹RU is onto H.

Proof. Let R be generic in H. Then U⁻¹RU ∈ H and, by the very construction of H, U⁻¹RU admits the elements of (M) as commutators. In other words, the elements of U(M)U⁻¹ commute with R. But R is generic and the elements of U(M)U⁻¹ are rational, so that U(M)U⁻¹ ⊂ (M). By considerations of rank, we see that U(M)U⁻¹ = (M), actually. Let R ∈ H; then we claim that R = U⁻¹R₁U for some R₁ ∈ H. For, the algebra U⁻¹(M)U leads us to another space H₁ of R-matrices, having U′AU for a principal matrix and admitting (M) = U⁻¹(M)U as algebra of multipliers. But for the generic elements R of H, U⁻¹RU is again generic and belongs to H ∩ H₁. Thus, by Proposition 14, H = H₁ and, in other words, the mapping R → U⁻¹RU is onto H. The proposition is proved.
From the working above, we see that, for a generic R ∈ H, both A and U′AU are principal matrices. Hence, by Proposition 15, U′AU = AM for a positive element M of (M). Rewriting this, we have (since U* = A⁻¹U′A)

U*U = M, for a positive element M ∈ (M).   (120)

If U is a unimodular matrix satisfying (120), then it is easy to verify that the mapping R → U⁻¹RU is a mapping of H onto itself, and hence U⁻¹(M)U = (M).
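In the classical special case (a sketch under the assumption that (M) consists of the rational multiples of E and A = J is the alternate matrix), we have U* = A⁻¹U′A, and condition (120) with M = E becomes the symplectic condition U′JU = J; for instance:

```python
def mm(X, Y):
    # product of two 2-rowed matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0, 1], [-1, 0]]
U = [[1, 1], [0, 1]]                       # a sample unimodular matrix

Ut = [[U[j][i] for j in range(2)] for i in range(2)]
assert mm(Ut, mm(J, U)) == J               # U'JU = J, i.e. (120) with M = E
```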
It is easy to verify that the 2n-rowed unimodular matrices U satisfying (120) for some positive element M ∈ (M) constitute a group Γ₀, which is the most general form of the homogeneous modular group of degree n. The group Γ₀ contains a trivial normal subgroup Δ, consisting of all U ∈ Γ₀ for which U⁻¹RU = R for every R ∈ H. For any U ∈ Γ₀, the mappings R → U⁻¹RU and R → (MU)⁻¹RMU are the same, whatever be M in Δ. The group Γ₀/Δ is the most general form of the inhomogeneous modular group of degree n.

It is trivial to see that, for U ∈ Δ, UM ∈ (M) for every M ∈ (M). For, taking a generic R ∈ H, UMR = URM = RUM, i.e. UM ∈ (M).
We shall now define two subgroups Γ₁, Γ₂ of Γ₀ such that Γ₁ is of finite index in Γ₀ and Γ₂ is of finite index in Γ₁, each one of them containing Δ.

Now, under the automorphism M → U⁻¹MU of (M), the centre (R) is taken onto itself, i.e. U⁻¹(R)U = (R). But the centre R, being an algebraic number field of finite degree over Q, admits only finitely many automorphisms over Q, and we define Γ₁ to be the subgroup of those U ∈ Γ₀ which correspond to the identity automorphism of R. In other words,

Γ₁ = { U ∈ Γ₀ | U⁻¹KU = K, for every K ∈ (R) }

It is easy to see that Γ₁ is of finite index in Γ₀. Moreover, Δ ⊂ Γ₁; for, if K ∈ (R) and U ∈ Δ, then, by our remark above, U ∈ (M) and therefore KU = UK. We call the group Γ₁ the homogeneous modular group of degree n in the wide sense, and the quotient group Γ₁/Δ the inhomogeneous modular group of degree n in the wide sense.
If U ∈ Γ₁, we see then that the mapping M → U⁻¹MU of (M) is an automorphism of (M) which is the identity on the centre (R). Thus, by Skolem's theorem ([23]), there exists M₁ ∈ (M) such that, for every M ∈ (M), we have U⁻¹MU = M₁⁻¹MM₁, i.e. UM₁⁻¹ · M = M · UM₁⁻¹. In other words, UM₁⁻¹ = T₁ ∈ (F), or

U = M₁T₁ = T₁M₁, with T₁ ∈ (F), M₁ ∈ (M)   (121)
The decomposition (121) of U ∈ Γ₁ is clearly not unique. Now, U*U = M₀ for a positive element M₀ ∈ (M), and this gives T₁*M₁*M₁T₁ = M₀, or T₁*T₁ = (M₁⁻¹)*M₀M₁⁻¹ = M₂ in (M). Since M₀ is a positive element in (M), so is M₂, by Proposition 12. But since M₂ ∈ (M) ∩ (F) = (R) and since M₂ = M₂*, it is immediate that M₂ represents a totally positive number in Z. Thus, for T₁ in (121), we have

T₁*T₁ = K₁, totally positive in (Z)   (122)
Suppose, for U ∈ Γ₁, we have two decompositions as in (121), say U = T₁M₁ = T₂M₂. Then it is immediate that T₁ = T₂K for some K ∈ (R). We now claim that, in the decomposition U = T₁M₁ as in (121), we can, by replacing T₁, M₁ respectively by T₁K⁻¹, KM₁ with a suitable K ∈ (R), ensure that KM₁ is integral, and furthermore that T₂ = T₁K⁻¹ has the following property: there exists d > 0 in Z (depending only on (M) and not on T₂) for which dT₂ is integral. Thus, in (121), we can suppose already that M₁ is integral and T₁ of bounded denominator.
(We shall briefly sketch a proof of this fact in a special case. Let V be an indefinite quaternion algebra over Q and q = 1. Then V has a splitting field Z = Q(√a) with a < 0. For an element ξ = α + βj ∈ V with α, β ∈ Z and j² = b (> 0) in Q, we have the representation (M) of V given by M = K₁[D, D̄]K₁⁻¹, where

D = [ α  β; bβ̄  ᾱ ],  P₁ = [ 1  1; √a  −√a ],  K₁ = P₁ × E₂.

The commutator-algebra (F) of (M) is precisely the set of

T = K₁ [ λE₂  μF; μ̄F  λ̄E₂ ] K₁⁻¹,

where λ, μ ∈ Z and F = [ 0  1; b  0 ]. Let T ∈ (F) be such that

TM = U   (∗)

with M ∈ (M) and U unimodular. In (∗), we can suppose that M is integral already, by replacing T, M respectively by m⁻¹T, mM for a suitable rational integer m. Let ω₁, ω₂ be a basis, over the rational integers, for the integers in Z. Denote the matrix [ ω₁  ω̄₁; ω₂  ω̄₂ ] by P, and P₁⁻¹P by P₂. We can find ρ₁, ρ₂ ∈ Z such that ρ₁P₁, ρ₁P₁⁻¹, ρ₂P₂, ρ₂P₂⁻¹ have elements which are integers in Z. Since M = K₁[ D  0; 0  D̄ ]K₁⁻¹ and U = K₁[ λE₂  μF; μ̄F  λ̄E₂ ]K₁⁻¹M are both integral, we see that ρ₁²λD, ρ₁²λ̄D̄, ρ₁²μFD̄, ρ₁²μ̄FD are all integral. Let G₁, …, G_{h₀} be fixed integral ideals (say, of minimum norm) in the h₀ ideal classes of Z. Then there exist κ ∈ Z and an ideal G_ν (1 ≤ ν ≤ h₀) such that

κρ₁²λD = [ α₁  β₁; bβ̄₁  ᾱ₁ ]

and α₁, β₁, bβ̄₁, ᾱ₁ have the greatest common divisor G_ν. Define

T₁ = (κρ₁ρ₂)⁻² K₁ [ λE₂  μF; μ̄F  λ̄E₂ ] K₁⁻¹,  M₁ = (κρ₁ρ₂)² K₁ [ D  0; 0  D̄ ] K₁⁻¹

It is clear that M₁ is in (M) and is integral; further, T₁M₁ = U. Moreover, if we define d = b⁴(ρ₁ρ̄₁ρ₂ρ̄₂)² ∏_{k=1}^{h₀} N(G_k) (where N(G_k) denotes the norm of G_k over Q), we see that dT₁ is integral.)
Let us denote by
2
, the subgroup of U
1
for which there is a
decomposition of the form (121) with unimodular T
1
and M
1
in (F)
and (M) respectively. We now prove
Proposition 18. The group
2
is of nite index in
1
.
Proof. Let U
1
, U
2
be in
1
and U
1
= T
1
M
1
, U
2
= T
2
M
2
be the decom-
positions of U
1
, U
2
as in (121). Now, as we remarked, M
1
, M
2
may be
8. Modular groups associated with Riemann matrices 89
supposed to be integral. We shall now prove that if M
1
M
2
(mod d),
then U
1
2
U
1

2
. Since the number of residue classes of 2n-rowed inte-
gral square matrices modulo d, is nite, it will follow that
2
is of nite
index in
1
. So let
M
1
M
2
(mod d) (123)
It is clear that dM
1
1
= dU
1
1
T
1
and dM
1
2
= dU
1
2
T
2
are integral.
But from (123), we have dE
2n
dM
2
M
1
1
(mod d) which means that
M
2
M
1
1
is integral. In a similar way, M
1
M
1
2
is also integral so that
M
2
= WM
1
with unimodular W. But now U
2
U
1
1
= T
2
M
2
M
1
1
T
1
1
=
T
2
T
1
1
W so that T
2
T
1
1
is itself unimodular in (F). Thus U
2
U
1
1
is in

2
which is what we sought to prove.
If U , then U (M) and therefore
2
.
We now define another group Δ₂, consisting of the unimodular matrices T₁ ∈ (F) for which T₁*T₁ = K represents a totally positive unit in L. Defining Λ₂ as the subgroup of Δ₂ consisting of the unimodular U ∈ (R), we see that Λ₂ ⊂ Δ₂.
Any U ∈ Γ₂ is of the form T₁M₁ with unimodular T₁ in (F) satisfying (122) and unimodular M₁ in (M). Since T₁ and T₁* in this decomposition commute, we get, by iteration,

(T₁*)^l T₁^l = (T₁^l)* T₁^l = K₁^l    (124)

for every positive integer l. Now, although T₁ is unimodular, T₁* is not necessarily integral, so that K₁ is not necessarily integral. But since dT₁^l is integral and A is fixed, we see from (124) that K₁^l is of bounded denominator for every l > 0. This is impossible, unless K₁ represents an integer in (Z). By the same argument, we can show that K₁⁻¹ is integral, so that K₁ is actually a (totally positive) unit in (Z). Thus for U = T₁M₁ ∈ Γ₂ with T₁ ∈ (F), we see first that T₁ ∈ Δ₂ and furthermore, for any T₁ ∈ Δ₂,

T₁*T₁ = K₁, a totally positive unit in (Z)    (125)
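The bounded-denominator step has a simple scalar analogue, sketched below with illustrative numbers: if d·k^l has bounded denominator for every l > 0, then k must be an integer, since the denominator of k^l grows without bound whenever k is a non-integral rational.

```python
from fractions import Fraction

def denominators(k, d, lmax):
    # Denominators of d * k**l for l = 1, ..., lmax.
    return [(d * k ** l).denominator for l in range(1, lmax + 1)]

d = 12
print(denominators(Fraction(3, 2), d, 6))   # grows without bound
print(denominators(Fraction(3, 1), d, 6))   # stays 1: k is an integer
```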
We construct a mapping σ of Γ₂ into Δ₂/Λ₂ by defining σ(U) = the coset of Δ₂ modulo Λ₂ containing T₁, where T₁ in (F) occurs in the decomposition U = T₁M₁. It is clear that σ is well-defined, for if U = T₁M₁ = T₂M₂, then T₂⁻¹T₁ ∈ Λ₂, by using (125). Further, σ is clearly a homomorphism of Γ₂ onto Δ₂/Λ₂, the kernel being exactly Δ. Thus we have proved that

Γ₂/Δ is isomorphic to Δ₂/Λ₂.

The group Δ₂/Λ₂ is referred to as the inhomogeneous modular group of degree n.
Finally, we define the group Δ₃ as the subgroup of T ∈ Δ₂ for which

T*T = E    (126)

and Λ₃ as the subgroup of K in Δ₃ for which K ∈ (R). It is easy to see that Λ₃ is precisely the set of roots of unity in (F) which belong to the order (U) in (M), and therefore Λ₃ is finite. Although, in view of (117), the definition (126) of Δ₃ apparently depends on G_q, it is trivial to verify that (126) depends only on F_q.

Proposition 19. The group Δ₃/Λ₃ is of finite index in Δ₂/Λ₂.
Proof. Let E be the group of all totally positive units in (Z), E₁ = E ∩ (U) and E₂ the group of squares of elements in E₁. By Dirichlet's theorem on units in algebraic number fields, there exist finitely many elements L₁, ..., L_a of E such that any K in E is of the form K = NL_μ for some N ∈ E₂ and some L_μ. Let now T₁ ∈ Δ₂ satisfy (125) and let K₁ = N₁²L_μ for some N₁ ∈ E₁ and some L_μ. Thus

(N₁⁻¹T₁)*(N₁⁻¹T₁) = L_μ    (127)

On the other hand, let A_μ, unimodular in (F), be a fixed matrix satisfying A_μ*A_μ = L_μ, for 1 ≤ μ ≤ a. Then clearly N₁⁻¹T₁A_μ⁻¹ ∈ Δ₃, i.e. T₁ = N₁BA_μ for N₁ ∈ (Z), B ∈ Δ₃ and one of the finitely many matrices A₁, ..., A_a. It is immediate, using (127), that Δ₃/Λ₃ is of finite index in Δ₂/Λ₂.
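Dirichlet's theorem enters this proof only through the finiteness of the totally positive units modulo squares. A numerical sketch in the illustrative real quadratic field Q(√2) (not a field singled out by the text): the fundamental unit is 1 + √2, of norm −1, and its square 3 + 2√2 is totally positive, as are all squares of units.

```python
import math

SQRT2 = math.sqrt(2)

def mul(u, v):
    # Product of a + b*sqrt(2) and c + d*sqrt(2) in Z[sqrt(2)],
    # with elements stored as pairs (a, b).
    (a, b), (c, d) = u, v
    return (a * c + 2 * b * d, a * d + b * c)

def totally_positive(u):
    # Both real embeddings a + b*sqrt(2) and a - b*sqrt(2) positive.
    a, b = u
    return a + b * SQRT2 > 0 and a - b * SQRT2 > 0

eps = (1, 1)            # fundamental unit 1 + sqrt(2); norm -1
eps2 = mul(eps, eps)    # (1 + sqrt(2))^2 = 3 + 2*sqrt(2)
print(totally_positive(eps))              # conjugate 1 - sqrt(2) < 0
print(totally_positive(eps2))             # totally positive
print(totally_positive(mul(eps2, eps2)))  # squares stay totally positive
```

Here every totally positive unit is a square of a unit times one of finitely many representatives, which is the finiteness used for the elements L₁, ..., L_a above.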
The group Δ₃ is called the homogeneous modular group of degree n in the restricted sense. The quotient Δ₃/Λ₃ is the inhomogeneous modular group in the restricted sense.
The groups Δ₂/Λ₂ and Δ₃/Λ₃ occur in the literature already in special cases. In fact, taking V = R, a totally real field over Q, q = 2, and with obvious restrictions on (M), we see that they are nothing but the inhomogeneous Hilbert modular group over R, in the wide sense and in the narrow sense respectively.

It might be of interest to construct fundamental regions for these groups in H and study the automorphic functions on H relative to these groups. We refer the interested reader to some recent work of K.G. Ramanathan [15] in this direction.
We might conclude with an outline of a method of constructing a fundamental region in the H-space for one of the groups above, say Δ₃. The group Δ₃ acts on H as follows: namely, to T ∈ Δ₃ corresponds the mapping R → T⁻¹RT of H onto itself. Let A be a principal matrix for H. We simplify our problem by considering the matrices AR = S = S′ > 0 (for R ∈ H). In terms of S, the mapping R → T⁻¹RT is just the mapping S → T′ST. By Minkowski's reduction theory for unimodular matrices acting on the space of 2n-rowed real symmetric positive-definite matrices, we know that, corresponding to the given S, there exists a unimodular matrix T such that S₁ = T′ST lies in the reduced Minkowski domain F₂ₙ.
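For n = 1 (binary forms) the Minkowski reduction invoked here is classical Lagrange-Gauss reduction. The sketch below finds, for a given 2-rowed S = S′ > 0, a unimodular T with T′ST reduced; it illustrates only this single reduction step, and the termination and rounding conventions are the standard ones, not details taken from the text.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def minkowski_reduce(S):
    # Return (T, S1) with T unimodular (det +1 or -1) and S1 = T' S T
    # Minkowski-reduced, i.e. 2*|s12| <= s11 <= s22.
    T = [[1, 0], [0, 1]]
    while True:
        s11, s12, s22 = S[0][0], S[0][1], S[1][1]
        if 2 * abs(s12) > s11:
            U = [[1, -round(s12 / s11)], [0, 1]]   # shear off a multiple
        elif s11 > s22:
            U = [[0, 1], [1, 0]]                   # swap basis vectors
        else:
            return T, S
        S = matmul(transpose(U), matmul(S, U))
        T = matmul(T, U)

T, S1 = minkowski_reduce([[10, 7], [7, 5]])
print(T, S1)
```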
But T may not belong to Δ₃. On the other hand, we know that R² = −E₂ₙ, i.e. A⁻¹SA⁻¹S = −E₂ₙ, i.e. A′S⁻¹A = S, i.e. (T′AT)′S₁⁻¹(T′AT) = S₁. Again, since S₁ is reduced in the sense of Minkowski, we conclude by a theorem of Siegel (Satz 5, p.200 [20]) that T′AT belongs to a finite set of matrices, say T₁′AT₁, T₂′AT₂, ..., T_κ′AT_κ. Now, for any reducing matrix T obtained as above, we have T′AT = T_k′AT_k for some T_k (1 ≤ k ≤ κ), i.e.

(TT_k⁻¹)′A(TT_k⁻¹) = A.

In other words, (TT_k⁻¹)*(TT_k⁻¹) = E, i.e. TT_k⁻¹ ∈ Δ₃. It may now be verified as usual that

F = ⋃_{k=1}^{κ} (A⁻¹ T_k′⁻¹ F₂ₙ T_k⁻¹ ∩ H)

is a fundamental region for Δ₃ in the H-space.
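The covering property of F, namely that every R ∈ H has a Δ₃-translate in F, can be verified directly from the relations above; the following derivation is a sketch of that verification, written out under the convention V = TT_k⁻¹ for the reducing matrix T.

```latex
% With S = AR and S_1 = T'ST \in F_{2n}, write T = V T_k, V = T T_k^{-1} \in \Delta_3.
S_1 = T' S T \;\Longrightarrow\;
R = A^{-1} S = A^{-1} T'^{-1} S_1 T^{-1}
  = A^{-1} V'^{-1} T_k'^{-1} S_1 T_k^{-1} V^{-1}.
% Since V' A V = A, we have A^{-1} V'^{-1} = V A^{-1}, and hence
V^{-1} R V = A^{-1} T_k'^{-1} S_1 T_k^{-1}
  \in A^{-1}\, T_k'^{-1} F_{2n} T_k^{-1} \cap H .
```

Thus the Δ₃-translate V⁻¹RV of R lies in the k-th piece of F.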
Bibliography

[1] A.A. Albert : Structure of Algebras, New York, 1939.

[2] A.A. Albert : On the construction of Riemann matrices I, Ann. of Math., pp.1-28, Vol.35 (1934).

[3] A.A. Albert : A solution of the principal problem in the theory of Riemann matrices, Ann. of Math., pp.500-515, Vol.35 (1934).

[4] A.A. Albert : On the construction of Riemann matrices II, Ann. of Math., pp.376-394, Vol.36 (1935).

[5] A.A. Albert : Involutorial simple algebras and real Riemann matrices, Ann. of Math., pp.886-964, Vol.36 (1935).

[6] A.A. Albert : On involutorial algebras, Proc. Nat. Acad. Sci. U.S.A., pp.480-482, Vol.41 (1955).

[7] R. Brauer, H. Hasse and E. Noether : Beweis eines Hauptsatzes in der Theorie der Algebren, Crelles Journal, pp.399-404, Vol.167 (1931).

[8] M. Deuring : Algebren, Chelsea, 1948.

[9] H. Hasse : Theory of cyclic algebras over an algebraic number field, Trans. A.M.S., pp.170-214, Vol.34 (1932).

[10] G. Humbert : Sur les fonctions abéliennes singulières, Oeuvres, pp.297-498, t.II, 1936.

[11] S. Lefschetz : On certain numerical invariants of algebraic varieties with applications to abelian varieties, Trans. A.M.S., pp.327-482, Vol.22 (1921).

[12] H. Poincaré : Sur la réduction des intégrales abéliennes, Oeuvres, pp.333-351, t.III, 1934.

[13] I.I. Pyatetskii-Shapiro : Singular modular functions, Izv. Akad. Nauk SSSR, Ser. mat., pp.53-98, Vol.20 (1956) (also A.M.S. Translations, Ser. 2, pp.13-58, Vol.10 (1956)).

[14] I.I. Pyatetskii-Shapiro : Theory of modular functions and related questions in the theory of discrete groups, Uspekhi Mat. Nauk, pp.99-136, Tom XV, No.1 (1960) (also Russian Math. Surveys, pp.97-128, Vol.XV (1960)).

[15] K.G. Ramanathan : Quadratic forms over involutorial division algebras II, Math. Ann., pp.293-332, Bd.143 (1961).

[16] B. Riemann : Theorie der abelschen Funktionen, Gesamm. Math. Werke, pp.88-142, Dover, 1953.

[17] C. Rosati : Sulle matrici di Riemann, Rend. Circ. Mat. Palermo, pp.79-134, t.53 (1929).

[18] G. Scorza : Intorno alla teoria generale delle matrici di Riemann, Rend. Circ. Mat. Palermo, pp.263-380, t.41 (1916).

[19] C.L. Siegel : Darstellung total positiver Zahlen durch Quadrate, Math. Zeit., pp.246-275, Bd.11 (1921).

[20] C.L. Siegel : Einheiten quadratischer Formen, Abh. math. Sem. Hans. Univ., pp.209-239, Bd.13 (1940).

[21] C.L. Siegel : Discontinuous groups, Ann. of Math., pp.674-689, Vol.44 (1943).

[22] C.L. Siegel : Die Modulgruppe in einer einfachen involutorischen Algebra, Festschrift Akad. Wiss. Göttingen, 1951.

[23] T. Skolem : Zur Theorie der assoziativen Zahlensysteme, Skr. Norske Vid.-Akad., Oslo, pp.21-22, 1927.

[24] J.H.M. Wedderburn : On division algebras, Trans. A.M.S., pp.129-135, Vol.22 (1921).

[25] A. Weil : Algebras with involutions and the classical groups, Jour. Ind. Math. Soc., pp.589-623, Vol.24 (1960).

[26] A. Weil : Introduction à l'étude des variétés kählériennes, Hermann, 1958.

[27] H. Weyl : On generalized Riemann matrices, Ann. of Math., pp.714-729, Vol.35 (1934).

[28] H. Weyl : Generalized Riemann matrices and factor sets, Ann. of Math., pp.709-745, Vol.37 (1936).