Sie sind auf Seite 1von 24

PHYSICAL REVIEW E 85, 061130 (2012)

Wigner surmise for mixed symmetry classes in random matrix theory


Sebastian Schierenberg, Falk Bruckmann, and Tilo Wettig
Institute for Theoretical Physics, University of Regensburg, 93040 Regensburg, Germany
(Received 9 March 2012; published 27 June 2012)
We consider the nearest-neighbor spacing distributions of mixed random matrix ensembles interpolating
between different symmetry classes or between integrable and nonintegrable systems. We derive analytical
formulas for the spacing distributions of 2 2 or 4 4 matrices and show numerically that they provide very
good approximations for those of random matrices with large dimension. This generalizes the Wigner surmise,
which is valid for pure ensembles that are recovered as limits of the mixed ensembles. We show how the coupling
parameters of small and large matrices must be matched depending on the local eigenvalue density.

DOI: 10.1103/PhysRevE.85.061130 PACS number(s): 02.50.r, 05.45.Mt

I. INTRODUCTION with = 1,2,4 corresponding to the Gaussian orthogonal


ensemble (GOE), Gaussian unitary ensemble (GUE), and
Random matrix theory (RMT) is a powerful mathematical
Gaussian symplectic ensemble (GSE) of RMT, respectively.
tool which can be used to describe the statistical behavior of
The quantities a and b are chosen such that
quantities arising in a wide variety of complex systems. It has
 
been applied to many mathematical and physical problems
with great success; see [13] for reviews. This wide range of ds P (s) = 1 and s = ds P (s) s = 1 (1.2)
0 0
applications is based on the fact that RMT describes universal
quantities that do not depend on the detailed dynamical in all three cases. Explicit formulas will be given in Sec. II.
properties of a given system but rather are determined by RMT describes quantum systems whose classical coun-
global symmetries that are shared by all systems in a given terparts are chaotic [9] and correctly predicts the strong
symmetry class. short-range correlations of the eigenvalues due to interactions.
In RMT the operator governing the behavior of the system, In contrast, the level spacing distribution of a quantum system
such as the Hamilton or Dirac operator, is replaced by a random whose classical counterpart is integrable is given by that of a
matrix with suitable symmetries. One then studies statistical Poisson process,
properties of the eigenvalue spectrum of such random matrices,
P0 (s) = es , (1.3)
typically in the limit of large matrix dimension. To compare
different systems in the same symmetry class with RMT, the corresponding to uncorrelated eigenvalues. We assign the
eigenvalues of the physical system as well as those of the Dyson index = 0 to ensembles of this kind, which is a
random matrices need to be unfolded [4]. The purpose of consistent extension of the generalized Gaussian ensembles
such an unfolding procedure is to separate the average behavior with arbitrary real > 0 introduced in Ref. [10].
of the spectral density (which is not universal) from the spectral Often physical systems consist of parts with different
uctuations (which are universal). Unfolding is essentially a symmetries, or of a classically integrable and a chaotic
local rescaling of the eigenvalues, resulting in an unfolded part. Changing a parameter of the system may then result
spectrum with mean level spacing equal to unity. How the in transitions between different symmetry classes. Now,
rescaling is to be done is not unique and may depend on the the question is whether a symmetry transition in a given
system under study. physical system can be described by a transition between
In this paper we focus on the so-called nearest-neighbor RMT ensembles (or Poisson). It has been shown in numerous
spacing distribution P (s), i.e., the probability density to nd studies that this is indeed the case. For example, billiards are
two adjacent (unfolded) eigenvalues at a distance s. This showcases for the interplay of chaos and integrability, and
quantity probes the strength of the eigenvalue repulsion due to certain billiards exhibit PoissonGOE transitions [1114]. A
interactions and can be computed analytically for the classical transition between GOE and GUE behavior takes place in
RMT ensembles, resulting in rather complicated expressions the spectrum of a kicked top [15] or kicked rotor [16] when
given in terms of prolate spheroidal functions [5]. However, time-reversal symmetry is gradually broken. Furthermore, a
it was realized early on that the level spacing distribution of transition from Poisson to GOE statistics was found for random
large random matrices is very well approximated by that of points on fractals as the dimension is changed [17]. In the
2 2 matrices in the same symmetry class.1 For most practical spectrum of the hydrogen atom in a magnetic eld, transitions
purposes it is sufcient to use this so-called Wigner surmise [8] were observed from Poisson to GOE [18] as well as from
instead of the exact analytical result. It is given by GOE to GUE [19]. Transitions from Poisson to GOE or
GUE statistics also occur in condensed matter physics, e.g.,
P (s) = a s eb s ,
2
(1.1) in the metal-insulator (Anderson) transition [20,21] whose
properties are similar to those of the Brownian motion model
introduced in Ref. [22]. In relativistic particle physics the Dirac
operator shows transitions between different chiral symmetry
1
This does not work for non-Hermitian complex matrices [6,7]. classes [23] or an Anderson-type transition [2426]. In the

1539-3755/2012/85(6)/061130(24) 061130-1 2012 American Physical Society


SCHIERENBERG, BRUCKMANN, AND WETTIG PHYSICAL REVIEW E 85, 061130 (2012)

spectra of nuclei a transition between GOE and Poisson This paper is organized as follows. In Sec. II we derive
spectral statistics takes place when levels sequences with analytical results for P (s) for small matrix sizes. If H 
different exact quantum numbers are mixed [4]. We thus is from the GSE (i.e., H  is self-dual) we construct in
conclude that RMT is broadly applicable not only to pure Secs. II D, II F, and II G self-dual matrices H to maintain
systems but also to mixed systems. the Kramers degeneracy. In Sec. II H we consider the case
In this paper, we assume the Hamiltonian describing the where a 4 4 GSE matrix is perturbed by a non-self-dual
mixed system to be of the form2 GUE matrix. Section III provides strong numerical evidence
that the results obtained in Sec. II approximate the spacing
H = H + H  , (1.4) distributions of large random matrices very well. We give a
where H represents the original system whose symmetry perturbative argument for the matching of the couplings used
or integrability is broken by the perturbation H  for small for the Wigner surmise and for large matrices, respectively,
coupling parameter , and vice versa for large . For the and derive an approximate result that involves the eigenvalue
quantities we analyze the absolute scale of H is irrelevant; density. This result describes the numerical data rather well.
only the relative scale between the different parts matters. We also show that the transitions from the GSE to either a
From the level statistics point of view, H and H  non-self-dual Poissonian ensemble or the GOE proceed via an
correspond either to a Poisson process or to one of the three intermediate transition to the GUE and can also be described by
RMT ensembles. Hence, there are ( 24 ) = 6 possibilities for the surmises calculated in Sec. II. We summarize our ndings
and conclude in Sec. IV. Technical details are worked out in
a transition between two of these four cases in Eq. (1.4),
several Appendices.
i.e., PoissonGOE, PoissonGUE, PoissonGSE, GOEGUE,
GOEGSE, and GUEGSE. If a GSE matrix is involved in
the transition, there are two possibilities for the other matrix: II. SPACING DISTRIBUTIONS FOR SMALL MATRICES
self-dual or not.3 This leads to an even larger variety of mixed
A. Preliminaries
ensembles. Many transitions of this kind have been studied in
earlier works, usually for large matrix dimension. Transitions In the spirit of the Wigner surmise, we now calculate
between Gaussian ensembles are considered in Ref. [5], but the distributions P (s) of eigenvalue spacings s of mixed
closed forms for the spacing distribution could not be obtained, ensembles for the smallest nontrivial (i.e., 2 2 or 4 4)
and self-dual symmetry was not conserved in the transitions matrices, with P (s) normalized as in Eq. (1.2). Unfolding
involving the GSE. Mixtures of Gaussian ensembles with is not needed for these matrices since they have only two
conserved self-dual symmetry and small matrix size are independent eigenvalues (except for Sec. II H). We rst study
considered in Ref. [29], but only numerical results are given for the transitions from the integrable to the chaotic case for the
the spacing distributions. Other examples include the heuristic three Gaussian ensembles and then proceed to the transitions
Brody distribution [30] interpolating between Poisson and between different symmetry classes.
the GOE, the spacing distribution of a generalized Gaussian We dene the 2 2 Poisson process by a matrix
ensemble of 2 2 real random matrices [31], and a complete  
study of the transition between Poisson and the GUE [32]. The 0 0
H0 = , (2.1)
two-point correlation function of the latter case is also studied 0 p
in Ref. [33].
Note that an exact analytical calculation of P (s) for systems where p is a Poisson distributed non-negative random number
described by an ansatz of the form (1.4) is much harder than, with unit mean value; i.e., its probability density is P0 (p) =
e.g., the analytical calculation of low-order spectral correlation ep . The eigenvalue spacing of this matrix is obviously Poisso-
functions, which are already difcult to obtain. Here, we do nian, as the spacing is just p, and therefore we obtain Eq. (1.3).
not attempt an analytical calculation of P (s) for large matrix The choice of H0 may look like a special case, but it
dimension. Rather, motivated by the reliability of the Wigner sufces for our purposes. The most general Hermitian 2 2
surmise, we study the possible transitions in Eq. (1.4) for matrix with spacing p can be obtained from Eq. (2.1) by a
2 2 matrices (or, in the symplectic case, 4 4 matrices, common shift of the eigenvalues (which does not inuence
because the smallest nontrivial self-dual matrix has this size) the spacing) and a basis transformation. This transformation
and compare the resulting level spacing distributions with can be absorbed in the perturbing matrix since it does not
that of large random matrices, the latter obtained numerically. change the probability distribution of the latter. To see this,
The cases of PoissonGOE and GOEGUE were worked out suppose we had started with a general nondiagonal H0 , also
earlier by Lenz and Haake [15], and the spacing distribution with eigenvalues 0 and p, instead of Eq. (2.1). When added
of a 2 2 matrix interpolating between Poisson and GUE is to a random matrix H  with  = 1,2,4, we choose it to
given in Ref. [34]. These cases will briey be reviewed below, be real symmetric, Hermitian, or self-dual, respectively, in
and the remaining ones are the main subject of this work. order to preserve the symmetry properties of H  . Then H0
is diagonalized by a suitable matrix ; i.e., diag(0,p) =
1 H0 , where  is orthogonal ( = 1), unitary ( = 2), or
symplectic ( = 4). In the total matrix H this is equivalent to
2
Other possibilities have also been investigated (see, e.g., [5,27,28]) 1 H   perturbing diag(0,p), but the probability distribution
but will not be considered in this paper. of the perturbation is invariant under the transformation .
3
An even-dimensional matrix A is called self-dual if J AT J T = A, For matrices H1,2,4 from the GOE, GUE, and GSE,
with J given in Eq. (G4). respectively, we choose the mean values of the matrix elements

061130-2
WIGNER SURMISE FOR MIXED SYMMETRY CLASSES IN . . . PHYSICAL REVIEW E 85, 061130 (2012)

1.4 1.4 1.4


Poisson Poisson Poisson
1.2 GOE 1.2 GUE 1.2 GSE
interpolations interpolations interpolations
1 1 1

0.8 0.8 0.8


P (s)

P (s)

P (s)
0.6 0.6 0.6

0.4 0.4 0.4

0.2 0.2 0.2

0 0 0
0 0.5 1 1.5 2 2.5 0 0.5 1 1.5 2 2.5 0 0.5 1 1.5 2 2.5
s s s

FIG. 1. (Color online) Spacing distributions P0  (s; ) of the transitions Poisson GOE [left, Eq. (2.4)], GUE [middle, Eq. (2.12)] and
GSE [right, Eq. (2.26)] for 2 2 or 4 4 matrices with coupling parameters = 0.02, 0.08, 0.2, 0.4, and 0.8 (maxima moving from left to
right as increases). In the GSE case the matrix representing the Poisson process was made self-dual. All formulas were veried by comparison
to numerically obtained spacing distributions of 2 2 or 4 4 random matrices.

to be 0 and the normalization to the one given in Ref. [15], but our integration variable x is
 ()  2
scaled differently.
[(H1,2,4 )ii ]2  = 2 H1,2,4 = 1. (2.2)
i=j In the limiting cases of 0 and we have
The index = 0, . . . , 1 distinguishes the components of
1/(2) for 0,
the complex GUE or quaternion GSE matrix elements, while D() (2.7)
/2 for .
the GOE matrix elements possess only a real part.
All results we derive from Eq. (1.4) will be symmetric in Using the asymptotic expansion of the Bessel function, it is
since the distribution of the elements of H  is symmetric about straightforward to show that for 0 we obtain the Poisson
zero (the perturbation will be taken from one of the Gaussian result es . It is even simpler to show that the Wigner surmise
( s/2) es /4 for the GOE is obtained for .
2
ensembles in each case). This means that our results should be
expressed in terms of ||. To avoid such cumbersome notation The small-s behavior of P01 (s; ) shows interesting
we restrict ourselves to non-negative . features. To investigate this behavior, we consider separately
the cases = 0 and > 0. For = 0 we have by construction
B. Poisson to GOE P01 (s; 0) = es = 1 s + O(s 2 ). (2.8)
We rst consider the case that corresponds to a classically For > 0 we obtain from Eq. (2.4)
integrable system perturbed by a chaotic part with antiunitary
symmetry squaring to 1. The integrable part is represented by a P01 (s; ) = c()s + O(s 3 ), (2.9)
Poisson process, and the chaotic one by the GOE. The spacing with
distribution for this case has been derived in Ref. [15], and we
state it here for the sake of completeness.
c() for 0, (2.10)
The 2 2 random matrix 2
    which means that we recover the linear level repulsion of the
0 0 a c
H = H0 + H1 = + (2.3) GOE for arbitrarily small , i.e., for arbitrarily small admixture
0 p c b of the chaotic part as also observed in Refs. [3638]. This
consists of H0 from (2.1) and H1 from the GOE, i.e., a real implies that for 0 the distribution, viewed as a function
symmetric matrix with normalization given in Eq. (2.2). The of , develops a discontinuity at s = 0, since P01 (s = 0; =
calculations are very similar to the ones for the transition from 0) = 1 while P01 (s = 0; > 0) = 0. This effect is clearly
Poisson to the GSE, which are presented in Sec. II D (see also seen in Fig. 1 (left).
Appendix A). The resulting spacing distribution of H reads For small values of and s, we observe something
   reminiscent of the Gibbs phenomenon, i.e., the interpolation
D 2 s 2
2
x 2 x xDs overshoots the Poisson curve considerably. In the limit of
P01 (s; ) = Cs e dx e 4 I0 , (2.4)
0 0, one can show (see Appendix B 2) that the maximum of P01
with is at smax = 2.51393 with a nite value of P01 (smax ;
  0) = 1.17516. This implies an overshoot of 17.5% compared
1 to the Poisson curve. Such an effect also occurs in the
D() = U ,0,2 , (2.5)
2 2 transitions from Poisson to GUE and GSE that are treated
C() = 2D()2 , (2.6) in Secs. II C and II D below, with, respectively, quadratic and
quartic level repulsion in the small-s regime.
where U is the Tricomi conuent hypergeometric function (or The large-s behavior of P01 (s; ) is analyzed in Ap-
Kummer function) [[35], Eq. (13.1.3)] and I0 is a modied pendix A, and we obtain Poisson-like behavior for any nite
Bessel function [ [35], Eq. (9.6.3)]. P01 (s; ) is plotted in [see Eq. (A7)]. This is in contrast to the small-s behavior,
Fig. 1 (left) for various values of . The formula is equivalent which is GOE-like for any nonzero .

061130-3
SCHIERENBERG, BRUCKMANN, AND WETTIG PHYSICAL REVIEW E 85, 061130 (2012)

C. Poisson to GUE The integral in Eq. (2.12) can be computed numerically


We now consider the transition from Poisson to the GUE. without difculties as the integrand decays like a Gaussian
This corresponds to a classically integrable system with a for large x and becomes constant for small x.4 The resulting
chaotic perturbation without antiunitary symmetry. The 2 2 distribution P02 (s; ) is plotted in Fig. 1 (middle).
random matrix As in Sec. II B, a discontinuity is found at s = 0 toward the
    Poisson result. For > 0 we obtain from Eq. (2.12)
0 0 a c0 + ic1
H = H0 + H2 = +
0 p c0 ic1 b P02 (s; ) = c()s 2 + O(s 4 ), (2.19)
(2.11) with
contains H2 from the GUE, i.e., a complex Hermitian matrix 1
with normalization (2.2). The spacing distribution of an c() for 0. (2.20)
22
equivalent setup with different normalizations of the random
Hence we obtain the quadratic level repulsion of the GUE
matrix elements was already considered in Ref. [34], so we
for arbitrarily small coupling parameter. For 0, the
just state the result,
 maximum of the function is at smax = 3.00395 , with a value
x2 sinh z of P02 (smax ; 0) = 1.28475 (see Appendix B 2).
P02 (s; ) = Cs e 2 D 2 s 2
dx e 42 x , (2.12)
0 z The large-s behavior of P02 (s; ) is given by Eq. (A7);
i.e., it is Poisson-like.
with z = xDs/ and
1 1 2
D() = + e erfc() Ei(2 ) D. Poisson to GSE
2 2
2   In this case, a classically integrable system is perturbed by
2 1 3 3
+ 2 F2 ,1; , ; 2 , (2.13) a chaotic part with antiunitary symmetry squaring to 1 and
2 2 2 hence represented by the self-dual matrices of the GSE. One
4D()3 has to consider 4 4 matrices here, because a self-dual 2 2
C() = . (2.14) matrix is proportional to 12 and has only one nondegenerate

eigenvalue. As mentioned in Sec. I, there are now two
Here, erfc is the complementary error function [ [35], possibilities: The Poisson process could be represented by
Eq. (7.1.2)], Ei is the exponential integral [ [35], a self-dual or a non-self-dual matrix. Here we only consider
Eq. (5.1.2)], and 2 F2 is a generalized hypergeometric function the former possibility, while the latter will be discussed in
[39, Eq. (9.14.1)]. We could also have written
the result in the Sec. III E. A self-dual Poisson matrix is obtained by taking a
form of Eqs. (A2) and (A3) since sinh z = z/2 I1/2 (z). tensor product of Eq. (2.1) with 12 . Thus the transition matrix
To check the validity of Eq. (2.12) and to see the emergence is
of the limiting spacing distributions, we now consider the
limits 0 and . First note that for 0 we have 0 0 0 0
0 0 0 0
1 1
D and C 3 (2.15) H = H0 12 + H4 =
2 2 0 0 p 0
0 0 0 p
so that Eq. (2.12) becomes for s > 0
 a 0 c0 + ic3 c1 + ic2
s2 sinh z
dx e 42 (s +x )x 0 c1 + ic2 c0 ic3
1 2 2
P02 (s; 0) = lim 3 a
0 2 0 z + ,
 c0 ic3 c1 ic2 b 0
s ex 1  (sx)2 2 (s+x)2 
= dx lim e 4 e 42 c1 ic2 c0 + ic3 0 b
2 0 x 0
 

=2 [(sx)(s+x)]
(2.21)

= es , (2.16) where the GSE matrix H4 is Hermitian and self-dual, and


can be represented by a 2 2 matrix whose elements are real
which is the Poisson distribution as required. For we quaternions (see [5] for details).
have We now explain the calculation of the spacing distribution
2 32 for this transition. The computation of the previous cases,
D and C 2 (2.17)
Poisson to GOE and Poisson to GUE, can be done in a similar
fashion.
so that Eq. (2.12) becomes

32s 2 4s2 x2 sinh z
P02 (s; ) = e lim dx e 42 x
2
z0 0 z 4
Note that the integral can be expressed in terms of imaginary error
32s 2 2 functions, but for increasing s delicate cancellations occur that make
4s
= e , (2.18) it impractical to use this form for numerical evaluation. This is why
2 we present Eq. (2.12) as the nal formula, which is well suited for
which is the Wigner surmise for the GUE. numerical integration.

061130-4
WIGNER SURMISE FOR MIXED SYMMETRY CLASSES IN . . . PHYSICAL REVIEW E 85, 061130 (2012)

Due to the self-dual structure of H , the spacing S between with


its nondegenerate eigenvalues can be computed analytically 1
and reads c() for 0. (2.31)
 124
S = (a b p/)2 + 4c c , (2.22) For 0, the maximum of the function is at smax =
3.76023 , with a value of P04 (smax ; 0) = 1.43453 (see
where the repeated index indicates a sum from 0 to 3. We Appendix B 2).
have intentionally written S instead of s since we eventually The large-s behavior of P04 (s; ) is again Poisson-like
need to rescale the spacing to ensure s = 1. The desired and given by Eq. (A7).
spacing distribution is proportional to the integral
 3 E. GOE to GUE
I (S) = dp da db dc P0 (p)Pa (a)Pb (b)Pc (c ) With this section we start the investigation of transitions be-
=0
 tween different chaotic ensembles using the smallest possible
(S [a (b + p/)]2 + 4c c ), (2.23) matrix size.
We consider the 2 2 matrix
where we have rescaled S by for simplicity and are not yet
concerned with the normalization. The distributions P () of H = H1 + H2 . (2.32)
the random variables = a,b,c0 ,c1 ,c2 ,c3 are Gaussian, with The spacing distribution for this transition was already
variances given by Eq. (2.2), computed in Ref. [15]. With the normalization of ensembles
2
a,b = 2c20 ,c1 ,c2 ,c3 = 1. (2.24) given in Eq. (2.2), it reads
 
Ds
Inserting this into Eq. (2.23) and shifting b b p/ gives P12 (s; ) = Cs eD s erf
2 2
, (2.33)
 
3
dc ep 2 a 2 (bp/) c c
1 2 1 2
I (S) dp da db with

 
0 =0 1 + 2
 D() = + arccot , (2.34)
(S (a b)2 + 4c c ). (2.25) 1 + 2

C() = 2 1 + 2 D()2 . (2.35)
The multidimensional integral in this expression is computed
in Appendix C 1. Rescaling the spacing and normalizing the This formula matches the result of [15]up to a rescaling of
distribution to satisfy Eq. (1.2), we obtain the coupling parameter by a factor of 2, which is due to a
 different normalization of the ensembles used there.
x2 z cosh z sinh z
P04 (s; ) = Cs 4 eD s dx e 42 x
2 2
, In the limiting cases of 0 and we have
0 z3
(2.26) /2 for 0,
D() (2.36)
2/ for .
with z = xDs/ and
 For 0, the error function in Eq. (2.33) can be replaced by

D() = dx e2x unity (for s > 0), and we obtain the Wigner surmise for the
2 0 GOE. For , using the rst-order Taylor expansion of

(4x 3 + 2x)ex +
2
(4x 4 + 4x 2 1) erf(x) the error function yields the Wigner surmise for the GUE.
, The result (2.33) is plotted in Fig. 2 (left). In the small-s
x3
(2.27) region, we now have for > 0
8D()5 P12 (s; ) = c()s 2 + O(s 4 ), (2.37)
C() = , (2.28)
with
where erf is the error function [[35], Eq. (7.1.1)]. The last
c() for 0. (2.38)
2
term in the integrand of Eq. (2.26) is proportional to I3/2 (z),
Similar to the previous sections, a nonanalytic transition
in agreement with Eqs. (A2) and (A3).
between weaker and stronger level repulsion develops as
In the limiting cases of 0 and we nd
0, except that now there is no jump in the function itself
1/(2) for 0, but rather in its derivative at s = 0. Therefore, the stronger
D() (2.29)
8/(3 ) for . level repulsion takes over immediately in the small-s regime,
if > 0. As we shall see below, this also happens in the
For 0, manipulations analogous to those performed in remaining transitions, GOE to GSE and GUE to GSE, and
Eq. (2.16) lead to the Poisson result es . For the seems to be a characteristic feature of the mixed ensembles.
integral in Eq. (2.26) becomes trivial and yields 1/3 so that we The large-s behavior of P12 (s; ) is obtained immediately
obtain the Wigner surmise (64/9 )3 s 4 e64s /9 for the GSE.
2
from Eq. (2.33) by noticing that erf(x) 1 for x . In
Equation (2.26) is plotted in Fig. 1 (right) and again displays analogy to the transitions from Poisson to RMT this implies
a discontinuity at s = 0 as 0. For > 0 we now have that the large-s behavior is dominated by the ensemble with
P04 (s; ) = c()s 4 + O(s 6 ), (2.30) the smaller .

061130-5
SCHIERENBERG, BRUCKMANN, AND WETTIG PHYSICAL REVIEW E 85, 061130 (2012)

1 1.4 1.4
GOE GOE GUE
GUE 1.2 GSE 1.2 GSE
0.8 interpolations interpolations interpolations
1 1
0.6 0.8 0.8
P (s)

P (s)

P (s)
0.4 0.6 0.6

0.4 0.4
0.2
0.2 0.2

0 0 0
0 0.5 1 1.5 2 2.5 0 0.5 1 1.5 2 2.5 0 0.5 1 1.5 2 2.5
s s s

FIG. 2. (Color online) Spacing distributions P  (s; ) of the transitions GOE GUE [left, Eq. (2.33)], GOE GSE [middle, Eq. (2.41)],
and GUE GSE [right, Eq. (2.52)] for 2 2 or 4 4 matrices with coupling parameters = 0.1, 0.2, 0.4, and 0.8 (maxima increasing with
). In the cases involving the GSE, the GOE or GUE matrices were made self-dual. All formulas were veried by comparison to numerically
obtained spacing distributions of 2 2 or 4 4 random matrices.

F. GOE to GSE and the difference of the Bessel functions in the integral over x
As the GSE is involved in this transition, we need matrices can be replaced by unity, and the Wigner surmise for the GSE
of size 4 4. Again there are two possibilities: The GOE follows trivially.
matrix could be made self-dual, or it could be non-self-dual The distribution P14 (s; ) is plotted for several values of
(as it generically is). Here we only consider the former case, in Fig. 2 (middle) and displays a continuous interpolation
while the latter case will be discussed in Sec. III E. As in between the GOE and GSE curves. In the small-s region, the
Ref. [29] we dene a modied GOE matrix by level repulsion is of fourth order for nonvanishing . This is
visible in the plots and can be shown by expanding P14 (s; )
a 0 c 0 for > 0 and small s,
0 a 0 c
P14 (s; ) = c()s 4 + O(s 6 ), (2.45)
H1 12 = , (2.39)
c 0 b 0
with
0 c 0 b
2
with real parameters a, b, and c. This matrix is self-dual, so c() for 0. (2.46)
123
we can add it to a matrix from the GSE without spoiling the
symmetry properties of the latter. Thus we consider The large-s behavior of P14 (s; ) can be obtained using
the asymptotic expansion
H = H1 12 + H4 , (2.40)  
1 5/2
where H1 and H4 are normalized according to Eq. (2.2). The I0 (z) I1 (z) = e
z
+ O(z ) (2.47)
8 z3/2
eigenvalues of the sum are doubly degenerate and can be
calculated easily due to self-duality. in Eq. (2.41), resulting in

After some algebra (see Appendix C 2) we obtain for the C
s e2(Ds) for s . (2.48)
2
spacing distribution of H , P14 (s; )
32 D 3
 1
Again, the large-s behavior is dominated by the ensemble with
P14 (s; ) = Cs 4 e(1+2 )D s
2 2 2 2
dx (1 x 2 ) e(xDs)
0 the smaller .
[I0 (z) I1 (z)], (2.41)
G. GUE to GSE
where z = (1 x 2 )D 2 s 2 , I0 and I1 are modied Bessel
Again, due to the presence of the GSE, we have two
functions, and possibilities for the GUE: self-dual or not. The former case
3 + (1 + 2 )2 arccot is simpler and analyzed here, while the latter case will be
D() = , (2.42) considered in Sec. II H. We rst have to clarify how to obtain
2 1 + 2
a self-dual 4 4 matrix whose eigenvalues have the same
29/2
C() = 2 (1 + 2 )3/2 D()5 . (2.43) probability distribution as those of a 2 2 matrix from the
GUE. In analogy to Sec. II F, one could try H2 12 , but
In the limiting cases of 0 and we have the resulting matrix is not self-dual. Instead, we consider the

/(23/2 ) for 0, matrix
D() (2.44)  
8/(3 2 ) for . H2 0
H2 =
4
(2.49)
0 H2T
For 0, we use the asymptotic expansion of the Bessel
functions to simplify the integral over x in Eq. (2.41) and obtain with H2 given in Eq. (2.11). The eigenvalues of H24 are
the Wigner surmise for the GOE. For , the exponential obviously equal to those of H2 , but they are twofold degenerate.

061130-6
WIGNER SURMISE FOR MIXED SYMMETRY CLASSES IN . . . PHYSICAL REVIEW E 85, 061130 (2012)

Interchanging the second and third row and column of H24 , we H4 H +H


4 2
obtain the matrix

a 0 c0 + ic1 0
0 c0 ic1
a 0
H2sd = , (2.50)
c0 ic1 0 b 0
0 c0 + ic1 0 b
which is self-dual and has the same eigenvalues as H24 . A S S
2
matrix of this form was already introduced in Ref. [29].
The proper self-dual matrix for the GUE to GSE transition
is thus
H = H2sd + H4 , (2.51) S1
with H4 given in Eq. (2.21). The calculation of the correspond-
ing spacing distribution proceeds in close analogy with the one FIG. 3. (Color online) Perturbation of GSE eigenvalues removing
presented in Appendix C 2, and we nd the closed expression the degeneracy.

P24 (s; ) = Ce(Ds) [2(Ds)2 Dse(Ds) er(Ds)],
2 2

the GSE and another ensemble without self-dual symmetry.


(2.52) We will return to this point in Sec. III E.
with the imaginary error function er(x) = i erf(ix) and
  1. General considerations
1 4 arccsch
D() = 2+
2
, (2.53)
1 + 2 The 4 4 transition matrix is
23 H = H4 + H2 ,
C() = (1 + 2 )D(), (2.54) (2.59)

with H4 taken from the GSE and H2 from the GUE, both
where arccsch is dened in Ref. [[35], Eq. (4.6.17)].
in standard normalization, Eq. (2.2). As H2 has no self-dual
In the limiting cases of 0 and we have
symmetry, the twofold degeneracy of the GSE spectrum is

2/( ) for 0, removed and eigenvalue pairs are split up. If the perturbation
D() (2.55) is small, there are two different spacing scales in this setup, as
8/(3 ) for .
shown in Fig. 3 where the perturbation of two nearest-neighbor
For 0, the asymptotic expansion of the second term in eigenvalues is sketched:
the square brackets of Eq. (2.52) yields 1. This can be S1 : the spacings between previously degenerate eigenval-
neglected compared to the rst term in the square brackets, ues, which are of the same order of magnitude as the coupling
which gives the Wigner surmise for the GUE. For , parameter for small couplings; these are formed by the two
Taylor expansion of the square brackets in Eq. (2.52) yields smallest and the two largest eigenvalues of H ;
the Wigner surmise for the GSE. S2 : the intermediate spacing, which is formed by the second
The result (2.52) is plotted in Fig. 2 (right). In the small-s and third largest eigenvalue of H ; in the limit 0 this is
region, we have for = 0 the original spacing of the GSE matrix H4 .
P24 (s; ) = c()s 4 + O(s 6 ), (2.56) The joint probability density of the eigenvalues of H is
given, up to a rescaling, by [5, Eq. (14.2.7)]
with
256 P (1 ,2 ,3 ,4 )
c() for 0. (2.57)  
3 3 2 
4
= C0 exp i2 (1 ,2 ,3 ,4 )
The large-s behavior of P24 (s; ) can be obtained by i=1
noticing that for large s the rst term in the square brackets of [h(d21 )h(d43 ) + h(d32 )h(d41 ) h(d31 )h(d42 )], (2.60)
Eq. (2.52) dominates the second term so that
with
P24 (s; ) 2CD 2 s 2 e(Ds)
2
for s . (2.58) 
Again, the large-s behavior is dominated by the ensemble with (1 ,2 ,3 ,4 ) = (j i ), (2.61)
the smaller . i<j

h(x) = xex
2
/2
, (2.62)
H. GSE to GUE without self-dual symmetry
In this section, we consider a matrix taken from the GSE dij = i j , (2.63)
whose Kramers degeneracy is lifted by a perturbation taken 1 6
from the GUE without self-dual symmetry. As we shall see, C0 = (2 + 2 )5 . (2.64)
9 2
this case also gives a surmise for other transitions involving

061130-7
SCHIERENBERG, BRUCKMANN, AND WETTIG PHYSICAL REVIEW E 85, 061130 (2012)

1 0.94
2x2 GUE 1.2 GSE
4x4 GUE (s1) 4x4 GUE (s2)
0.8 1
interpolations 0.93 interpolations
0.6 0.8
P(s )

P(s )
P(s1)
1

2
0.92
0.6
0.4
0.4
0.91 2x2 GUE
0.2
4x4 GUE (s1) 0.2
interpolations
0 0.9 0
0 0.5 1 1.5 2 2.5 0.9 0.95 1 0 0.5 1 1.5 2 2.5
s s1 s2
1

FIG. 4. (Color online) Spacing distributions for the transition from GSE GUE (non-self-dual) for 4 4 matrices and various values of
the coupling parameter . Left: Spacings s1 between previously degenerate eigenvalues. Middle: Spacings s1 , but zoomed in to the rectangular
region indicated in the plot on the left; 2 2 GUE stands for 0; 4 4 GUE stands for ; for the interpolation curves we chose
= 0.4,1,2 (maxima decreasing). Right: Intermediate spacing s2 ; GSE stands for = 0; 4 4 GUE stands for ; for the interpolation
curves we chose = 0.05,0.15,0.3,1 (maxima decreasing).

As we are only interested in spacings and thus in differences evaluating the spacing distribution in the limit 0. First
of eigenvalues, we introduce new variables note that
t1 = d21 = 2 1 , (2.65) 2
lim 3 x h(x) = (x), (2.72)
t2 = d32 = 3 2 , (2.66) 0
t3 = d43 = 4 3 (2.67) where the dependence of h, which is suppressed in our
and keep the original variable 1 . The Jacobi determinant notation, plays a crucial role. As the mean value of the spacing
of this transformation is 1, and we can now perform the 1 S1 on the original scale has to become arbitrarily small in
integration, which results (up to a constant factor) in the GSE limit due to the Kramers degeneracy, we consider
a rescaled spacing s1 = S1 /. Therefore h(S1 ) becomes for
P (t1 ,t2 ,t3 ) = (t1 ,0,t2 ,t2 + t3 ) small
 !
exp 14 (t1 + 2t2 + t3 )2 + 2t12 + 2t32 0
h(S1 ) = h(s1 ) s1 es1 .
2
 (2.73)
h(t1 )h(t3 ) h(t1 +t2 )h(t2 +t3 )
With these considerations we obtain from Eq. (2.68)
+ h(t1 +t2 +t3 )h(t2 ) . (2.68)
P (s1 ,t2 ,t3 )
We now derive the distributions of the two different kinds
e 4 [(s1 +2t2 +t3 ) +2 s1 +2t3 ]
1 2 2 2 2
of spacings from this formula. We assume 1  2  3  4  2
and include the resulting combinatorial factor of 4! explicitly. 2s
1 es1 (t3 )(s1 + t2 )(s1 + t2 + t3 )t2 (t2 + t3 )
2


2. Spacings between originally degenerate eigenvalues (s1 + t2 ) (t2 + t3 ) s1 t2 t3 (s1 + t2 + t3 )
To obtain the distribution of the spacing between the two 
smallest eigenvalues of H (the two largest ones give the same + (s1 + t2 + t3 ) (t2 ) s1 (s1 + t2 )(t2 + t3 )t3 (2.74)
result due to symmetry), we set t1 = S1 and integrate over t2
and t3 from 0 to . This results in the spacing distribution as 0. The last two terms in square brackets vanish upon
 evaluation of the t2 and t3 integrals, because the zeros of the
P42 (s1 ; ) = CD
1
dt2 dt3 P (Ds1 ,t2 ,t3 ), (2.69) arguments of their functions lie outside of the integration
0 region. Performing the t3 integration in the rst term we obtain
with for nonzero and s1
4 1 0
(s1 ; ) s12 es1 .
2
C() = 3/2 6 (2 + 2 )5 , (2.70) P42 (2.75)
3
 Up to normalization and rescaling this is the spacing distribu-
D() = C() dS1 dt2 dt3 S1 P (S1 ,t2 ,t3 ). (2.71) tion of a 2 2 GUE matrix.
0
In the opposite limit the result (2.69) reduces to
We replaced S1 by s1 to indicate that this is the spacing on the the distribution of the rst and last spacings of a pure 4 4
unfolded scale, i.e., with a mean value of 1. One of the integrals GUE matrix. This distribution can be obtained from similar
could in principle be done analytically, but this results in such considerations, starting from [5, Eq. (3.3.7)].
a lengthy expression that it seems more sensible to evaluate all The result (2.69) is shown in Fig. 4 (left and middle) for
integrals numerically. several values of , along with the limiting distributions for
The distribution in the limit 0 can either be obtained 0 and . All these curves are very similar and can
by perturbation theory (see Appendix D 1) or by directly only be distinguished by the naked eye in the zoomed-in plot.

061130-8
WIGNER SURMISE FOR MIXED SYMMETRY CLASSES IN . . . PHYSICAL REVIEW E 85, 061130 (2012)

We have validated the result (2.69) by comparing it to the density of H0 is thus 0 ( ) = N P( ), and the local mean level
spacing distribution of numerically obtained 4 4 random spacing is 1/0 ( ). We consider
matrices.
H = H0 + H2 , (3.1)
3. Perturbed GSE spacing where H2 is an N N random matrix taken from the GUE,
We now consider the perturbed spacing of the original GSE subject to the usual normalization, Eq. (2.2).
matrix, which was formed by the two degenerate eigenvalue As in the 2 2 case, the eigenvalues i will experience a
pairs of H4 . The distribution of this spacing is obtained by repulsion through H2 . We will show in rst-order perturbation
setting t2 = S2 and integrating P dened in Eq. (2.68) over theory that the relevant quantity for the repulsion is a
t1 and t3 from 0 to . With proper normalization as given in combination of the eigenvalue density of H0 and the variance
Eq. (2.2), this yields of the matrix elements of H2 .
 Ordinary perturbation theory in yields a rst-order
P42 (s2 ; ) = CD
2
dt1 dt3 P (t1 ,Ds2 ,t3 ), (2.76) eigenvalue shift of the i of
0
i(1) = (H2 )ii . (3.2)
with
4 This shift does not lead to a correlation of the eigenvalues, as
C() = 3/2 6 (2 + 2 )5 , (2.77) it just adds an independent Gaussian random number to each
3
 of them. Therefore, the eigenvalues remain uncorrelated, and
D() = C() dS2 dt1 dt3 S2 P (t1 ,S2 ,t3 ). (2.78) their spacing distribution remains Poissonian.
0 However, if there is a small spacing of order between
Again, the replacement of S2 by s2 means that this is the two5 adjacent eigenvalues k and of H0 , rst-order almost-
intermediate spacing on the unfolded scale, i.e., with a mean degenerate perturbation theory [40] predicts that the perturbed
value of 1. eigenvalues are the eigenvalues of the matrix
In the limit 0 the result (2.76) reduces to the Wigner    
k 0 (H2 )kk (H2 )k
surmise for the GSE, while in the opposite limit it + . (3.3)
0 (H2 ) k (H2 )
reduces to the spacing distribution of the intermediate spacing
of a pure 4 4 GUE matrix, which can again be obtained from This matrix is almost identical to the 2 2 matrix consid-
similar considerations. ered in Sec. II C, Eq. (2.11), with two differences: (i) The
The result (2.76) is shown in Fig. 4 (right) for several values unperturbed eigenvalues k and l are shifted, but this does
of , along with the limiting distributions for 0 and not affect the spacing distribution. (ii) The mean spacing
. The maximum of the interpolation rst drops down as is of the unperturbed eigenvalues is not 1, but 1/0 ( ). We
increased from 0, while at a value of around 1 it starts to rise dropped the subscript on the eigenvalue here, because
again as the distribution approaches its limit. Note adjacent eigenvalues are very close for large N , and therefore
that the limiting distributions of s1 and s2 for , i.e., the 0 (k ) 0 ( ) = 0 ( ).
red dashed curves in Fig. 4, turn out to be almost identical to To be able to match to the 2 2 formulas, we have to correct
each other and to the Wigner surmise for the GUE. for the different mean spacing of the unperturbed matrix. We
We have also validated the result (2.76) by comparing it to can do this by multiplying the matrix in Eq. (3.3) by 0 ( )
the spacing distribution of numerically obtained 4 4 random without affecting the normalized spacing distribution. This
matrices. results in the relation
( ) = 0 ( ) (3.4)
III. APPLICATION TO LARGE SPECTRA
between the coupling parameters of the 2 2 case and the
In this section we will show numerically that the formulas N N cases. Note that the 2 2 parameter has acquired
derived in Sec. II for small matrices describe the spacing distri- a dependence on the eigenvalue of H through the local
butions of large random matrices very well. This observation eigenvalue density of H0 . To be able to describe the spacing
should be viewed as our main result. distribution of H in the spectral region around by the
When comparing the results obtained from large matrices to generalized Wigner surmise, we assume that we have to insert
our generalized Wigner surmises, a natural question is how the this ( ) into the 2 2 formulas. This choice of universal
corresponding coupling parameters, i.e., in Eq. (1.4), should coupling parameter is in line with an unfolded coupling
be matched. This question will be addressed in the next section parameter mentioned in Refs. [32,41] and a similar result from
based on perturbation theory, while the numerical results will perturbation theory [42]. Appendix E contains a calculation
be presented in the remaining portions of Sec. III. for large matrices in second-order perturbation theory, also
showing that the strength of the perturbation to be used in the
A. Matching of the coupling parameters generalized Wigner surmise only depends on the combination
The setup is most easily explained by means of the transition 0 ( ) .
from Poisson to the GUE. The Poisson case is represented by
a diagonal N N matrix H0 with independent entries i (i =
1, . . . ,N), each distributed according to the same distribution 5
For small , we are unlikely to nd three or more small (i.e., of
P( ), which we choose independent of N . The eigenvalue order ) consecutive spacings.

061130-9
SCHIERENBERG, BRUCKMANN, AND WETTIG PHYSICAL REVIEW E 85, 061130 (2012)

We now turn from the example Poisson to GUE to the taken from the GSE. We choose a Gaussian for the distribution
general case, which we write as of the eigenvalues
of H 0 , i.e., P( ) = (1/ 2 ) exp( 2 /2),
so 0 (0) = N/ 2 . From Eq. (3.8) we would then expect the
H = H + H  . (3.5)
spacing distribution in the center of the spectrum around = 0
The same considerations hold with two modications: (i) The to be approximated by the corresponding 2 2 formulas (2.4),
unperturbed matrix is not necessarily diagonal by construction. (2.12), and (2.26) with coupling = .
However, it can be diagonalized by a transformation that As can be seen in Fig. 5, the formulas for the 2 2
can be absorbed in the perturbation.6 We can therefore treat matrices indeed describe the spectra of large matrices quite
it as diagonal (with eigenvalues correlated as dictated by well in a wide range of the coupling parameter . The spacing
the unperturbed ensemble). (ii) The mean spacing s of the distribution was evaluated in the center of the spectrum,
unperturbed 2 2 (4 4) matrix from Sec. II is dened as the interval (0.2,0.2), because the eigenvalue
density is almost constant and equal to 0 (0) in this region
s0 = 1(Poisson), s1 = (GOE), so that no unfolding is needed. The analytical curve was
(3.6)
4 16 obtained by a t (see Appendix F for details) of the 2 2
s2 = (GUE), s4 = (GSE). (or 4 4) formula to the numerical data with t parameter .
3
As expected by the perturbative considerations, comes out
Therefore, we now have to multiply Eq. (3.5) by s ( ) to get on the same order of magnitude as , and it almost matches
the correct mean spacing s for the unperturbed matrix. This for small . However, is considerably smaller than  for
results in a universal, but -dependent, coupling parameter stronger couplings. Presumably, the repulsion of the many
( ) = s ( ) (3.7) other eigenvalues in the spectrum not present in the smallest
matrices has a squeezing effect on the spacing, which works
with the eigenvalue density ( ) of the unperturbed matrix. against the repulsion caused by the perturbation. This would
Equation (3.7) holds for all the transitions we consider, and in explain the smaller coupling parameter.
each case is the Dyson index of the unperturbed ensemble.
In turn, this perturbative argument provides us with a 2. Dependence of coupling parameter on eigenvalue density
formula of how to choose the coupling in large matrices
in order to approximate the spacing distribution of H by 2 2 The considerations in Sec. III A imply a linear relation
(4 4) formulas with parameter , i.e., between local eigenvalue density and effective coupling
parameter, Eq. (3.7), for matrices of the form given in Eq. (3.1).
This means that a perturbation should have a different impact
= , (3.8)
( )s on the spacing distribution of a single matrix in different
where ( ) is the eigenvalue density in the spectral region we regions of its spectrum (as qualitatively observed in Ref. [15]).
wish to study. In this way we can choose a value of resulting This section provides a detailed analysis of this phenomenon.
in a spacing distribution roughly in the middle of the two Again, we consider a diagonal Poissonian matrix H0 of
limiting cases. Choosing in Eq. (3.5) without this guidance large dimension perturbed by a matrix taken from one of the
is likely to result in a spacing distribution that is dominated by Gaussian ensembles H  ,
one of the limiting cases. H = H0 + H  . (3.10)

B. Transitions from integrable to chaotic This time we will choose some xed and look separately at
different parts of the spectrum of H with a varying eigenvalue
1. Check of Wigner surmise
density. According to Eq. (3.7) the effective 2 2 (4 4)
We rst consider transitions from Poisson to RMT for coupling parameter should be the product of and the
matrices with N nondegenerate eigenvalues. The explicit local eigenvalue density of H0 . In Appendix E we show in
numerical realization is the Hamiltonian perturbation theory up to second order that the local coupling
 parameter is in fact a function of this product.
H = H0 + H  , (3.9) To treat such a system numerically one has to construct
0 (0)
a Poissonian ensemble with a varying eigenvalue density,
where H  is a matrix taken from one of the Gaussian perturb it, and measure the coupling parameter in different
ensembles, with normalization as given in Eq. (2.2). H0 is parts of the spectrum. This is done by cutting the spectrum
the same matrix as in Eq. (3.1) for the perturbation H  in into small windows with approximately constant eigenvalue
GOE or GUE, whereas a self-dual H0 is constructed by a density and tting (see Appendix F for details) the spacing
direct product with 12 as in Sec. II D if the perturbation is distributions inside the windows to the formulas for the
2 2 (4 4) matrices. We therefore obtain a tted coupling
parameter for each window.
6
Note that we choose the perturbations H  such that their For the numerical calculations, the eigenvalues i of the
probability distribution is always invariant under the transformations matrix H0 were distributed in the interval (N/2,N/2)
that diagonalize H , just like in the Poisson to RMT cases. However, according to the somewhat arbitrarily chosen distribution
this does not work for some of the transitions between the GSE and   2  3 
ensembles without self-dual symmetry, which we discuss separately 1 1 i i
P(i ) = +6 +8 , (3.11)
in Sec. III E. N 2 N N

061130-10
WIGNER SURMISE FOR MIXED SYMMETRY CLASSES IN . . . PHYSICAL REVIEW E 85, 061130 (2012)

= 0.05 = 0.1 = 0.2 =1


1 1
= 0.047 = 0.09 = 0.167 = 0.629
0.8 2 = 0.013 2 = 0.011 2 = 0.011 2 = 0.011 0.8
P (s)

0.6 0.6
0.4 0.4
0.2 0.2
0 0
0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2
s s s s
1.2 1.2
= 0.05 = 0.1 = 0.2 =1
1 1
= 0.046 = 0.091 = 0.173 = 0.728
0.8 2 = 0.031 2 = 0.013 2 = 0.017 2 = 0.013 0.8
P (s)

0.6 0.6
0.4 0.4
0.2 0.2
0 0
0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2
s s s s

1.2 = 0.05 = 0.1 = 0.2 =1 1.2


1 = 0.048 = 0.092 = 0.178 = 0.79 1
= 0.013 = 0.019 = 0.023 = 0.021
2 2 2 2
0.8 0.8
P (s)

0.6 0.6
0.4 0.4
0.2 0.2
0 0
0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2
s s s s

FIG. 5. (Color online) Spacing distributions for the transition of large matrices from Poisson to GOE (top), GUE (middle), and GSE
(bottom) with several values of the coupling  in Eq. (3.9). The histograms show the numerical data, while the full curves are the analytical
results for 2 2 (4 4) matrices with tted coupling parameter ; see Secs. II B through II D. The quantity 2 dened in Appendix F is a
measure of the t quality, which is small for a good t. Each plot has been obtained by diagonalizing 50 000 matrices with 400 nondegenerate
eigenvalues.

N being the number of independent eigenvalues of H0 . The As can be seen, the linear dependence of the effective
matrix H  is normalized in the usual way, Eq. (2.2). coupling parameter on the eigenvalue density is conrmed
The eigenvalue density ( ) of the total matrix H is plotted very well by the numerical data for all the transitions. Note
along with the analytical 0 ( ) = N P( ) of H0 in the top row that the t quality gets better with increasing Dyson index  ;
of Fig. 6. One can see that the perturbation only has a negligible i.e., it is worst for the GOE and best for the GSE. This is
effect on the spectral density. most likely explained by the fact that the spacing distributions
The dependence of the coupling parameter on the eigen- change more rapidly with respect to the coupling parameter
value density is plotted in the bottom row of Fig. 6 for for larger  (cf. Fig. 1), which allows for a more precise
= 0.1. No error bars are shown because the statistical errors measurement of the coupling.
are negligibly small. A linear t through the origin with Although the linear dependence of the effective coupling
minimized squared deviation was performed to obtain the on the eigenvalue density has been demonstrated beyond
proportionality factor between the eigenvalue density and the reasonable doubt, the proportionality factor is less clear. As can
coupling parameter. The quantity 2 shown in the plots is a be read off from Fig. 6 the proportionality factor is smaller than
measure of the t quality and dened by ; i.e., the measured coupling parameter is smaller than the
" expected one. This agrees with the observation in Sec. III B1
# N
# (i i )2 %  N
j where an explanation was given in terms of the effect of other
2 = $ , (3.12) eigenvalues.
i=1
N j =1
N

where the i are the numerically obtained coupling parameters C. Transitions from one symmetry class to another
for each spectral window and the i are the corresponding
predictions from the linear t at the given eigenvalue density. 1. Check of Wigner surmise
Because 2 is a monotonically increasing function of the We now consider chaotic systems composed of different
squared deviation it is also minimized by our tting procedure. symmetry classes, the latter represented by pure Gaussian

061130-11
SCHIERENBERG, BRUCKMANN, AND WETTIG PHYSICAL REVIEW E 85, 061130 (2012)

3 3 3
histogram: numerical data histogram: numerical data histogram: numerical data
2.5 initial density 2.5 initial density 2.5 initial density
0 0 0

2 = 0.1 2 = 0.1 2 = 0.1


()

()

()
1.5 1.5 1.5

1 1 1

0.5 0.5 0.5

0 0 0
200 0 200 200 0 200 200 100 0 100 200

0.25 0.25 0.25
= 0.1 = 0.1 = 0.1
0.2 = 0.081(1) 0.2 = 0.083(1) 0.2 = 0.084(1)
= 0.039 = 0.017 = 0.009
2 2 2
0.15 0.15 0.15
()

()

()
0.1 0.1 0.1

0.05 0.05 0.05

0 0 0
0 1 2 3 0 1 2 3 0 1 2 3

FIG. 6. (Color online) Transitions Poisson GOE (left), Poisson GUE (middle), and Poisson GSE (right). Top: Unperturbed and
perturbed eigenvalue density, the latter obtained numerically. Bottom: Effective coupling obtained from ts of the 2 2 (4 4) level spacings
P0  (s) [see Eqs. (2.4), (2.12), and (2.26)] as a function of the local eigenvalue density in 35 equally large windows of the spectrum. Linear
t and proportionality factor with errors dened by the 95% condence interval are given in the plots. The quantity 2 dened in Eq. (3.12) is a
measure of the t quality, which is small for a good t. The numerical data were obtained from 105 random matrices of dimension 600 (GOE
and GUE) or 800 (GSE).

ensembles. If the GSE is involved, we consider the case of the transition is almost completed in this case and that the
a self-dual perturbed ensemble in this section (see Sec. III D spacing distribution is already very similar to the one of the
for the case of a non-self-dual perturbed ensemble). A self-dual perturbing ensemble. What is relevant for the transition is
GOE can be constructed by taking the direct product with 12 not the relative magnitude of the matrix elements (which
as in Sec. II F, while the self-dual GUE is more involved (see depends on N through the local eigenvalue density) but the
Appendix G). All ensembles are normalized as in Eq. (2.2). rescaled coupling parameter ; i.e., the transition occurs for
Again motivated by Eq. (3.8), we consider the Hamiltonian  = O(1). The same phenomenon was found for the two-point
function [41], which is related to the spacing distribution for

H = H + H  . (3.13) small s.
(0)s
For large matrix size, the eigenvalue
density of H is a 2. Dependence of coupling parameter on eigenvalue density
semicircle which extends to r = 2N , and its eigenvalue
density in the center is We now consider the dependence of the coupling parameter
on the local eigenvalue density as in Sec. III B2, but now
2N for transitions between Gaussian ensembles. In these cases,
(0) = . (3.14) the tting procedure of the effective coupling becomes less

precise, because the functions of the spacing distributions
The results for the three transitions among the Gaussian change only very slowly with , as can be seen in Fig. 2.
ensembles are shown in Fig. 7 for N = 400. Again, only Therefore, we restrict ourselves to the case of a self-dual GOE
the center of the spectrum, dened as the interval (5,5), matrix H1 that is perturbed by a GSE matrix H4 as the level
was evaluated. (The whole semicircle extends to about 28 repulsion differs the most in these two ensembles.
for H GOE and about 40 for H GUE.) The coupling In Fig. 8 we show results from the mixed matrix
parameter was obtained by a t (see Appendix F for details)
to the corresponding 2 2 (4 4) formula, which yields a
H = H1 + H4 , = , (3.15)
good approximation to the numerical data throughout the 1 (0)s1
transition in each case. As in Sec. III B1, is close to 
as expected. with = 0.2. (For details about the self-dual GOE and the
For  = 1 and N = 400 the mixed matrix is roughly given normalization, see Sec. III C1.) According to Eq. (3.8) the
by H = H + O(101 )H  . From the values given in Fig. 7, effective 4 4 coupling parameter should be ( )s1 =
which should be compared to those in Fig. 2, we see that ( )/1 (0), i.e., proportional to the local eigenvalue density

061130-12
WIGNER SURMISE FOR MIXED SYMMETRY CLASSES IN . . . PHYSICAL REVIEW E 85, 061130 (2012)

= 0.05 = 0.2 = 0.3 =1


0.8 = 0.047 = 0.182 = 0.276 = 0.959 0.8
= 0.014 = 0.015 = 0.015 = 0.004
2 2 2 2
0.6 0.6
P (s)

0.4 0.4

0.2 0.2

0 0
0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2
s s s s

1.2 = 0.05 = 0.2 = 0.3 =1 1.2


1 = 0.048 = 0.193 = 0.296 = 1.231 1
2 = 0.015 2 = 0.018 2 = 0.019 2 = 0.002
0.8 0.8
P (s)

0.6 0.6
0.4 0.4
0.2 0.2
0 0
0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2
s s s s

1.2 = 0.05 = 0.2 = 0.3 =1 1.2


1 = 0.033 = 0.196 = 0.3 = 1.248 1
= 0.004 = 0.005 = 0.005 = 0.003
2 2 2 2
0.8 0.8
P (s)

0.6 0.6
0.4 0.4
0.2 0.2
0 0
0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2 0 0.5 1 1.5 2
s s s s

FIG. 7. (Color online) Spacing distributions for the transition of large matrices: GOE GUE (top), GOE GSE (middle), and
GUE GSE (bottom), with several values of the coupling  in Eq. (3.13). The histograms show the numerical data, while the full curves
are the analytical results for 2 2 (4 4) matrices with tted coupling parameter (see Secs. II E through II G). The quantity 2 dened in
Appendix F is a measure of the t quality, which is small for a good t. Each plot has been obtained by diagonalizing 50 000 matrices with 400
nondegenerate eigenvalues.

normalized by the density 1 (0) in the center, with the parameter  on the local density (and again, the perturbation
proportionality factor given by the input parameter . As one has no measurable effect on the eigenvalue density). The
can see, there is again a linear dependence of the tted coupling proportionality factor is almost compatible with the expected
value .

histogram: numerical data


= 0.2
initial density 0.2 D. Perturbation of a GSE matrix by a non-self-dual
10
1 = 0.194(1) /1(0)
= 0.2 0.15 = 0.004 GUE matrix
2
()
()

0.1
In this section, we apply the formulas derived in Sec. II H
5 for the spacing distributions of a 4 4 matrix from the GSE
0.05 perturbed by a matrix from the GUE, this time without self-
0 0 dual symmetry, to large matrices. We consider a 2N 2N
20 0 20 0 0.5 1

/ (0)
1
matrix

FIG. 8. (Color online) Transition GOE GSE. Left: Un-



H = H4 + H2 , (3.16)
perturbed eigenvalue density (approximated by a semicircle) and 4 (0)s4
perturbed eigenvalue density. Right: Effective coupling obtained
from ts of the 4 4 level spacings P14 (s) [see Eq. (2.41)], as a where H4 is taken from the GSE and H2 is the perturbation
function of the local eigenvalue density in 35 equally large windows in from the GUE. Both H4 and H2 are normalized in the usual
the spectrum. Linear t and proportionality factor with errors dened way [see Eq. (2.2), and for the prefactor of H2 see Sec. III A].
by the 95% condence interval are given in the plots. The quantity To ensure a constant eigenvalue density, we again restrict
2 dened in Eq. (3.12) is a measure of the t quality, which is small the measurements to the center of the spectrum, dened
for a good t. The numerical data were obtained from 105 random by the interval (5,5). The numerically obtained spacing
matrices of dimension 800. distributions were rescaled to a mean value of 1.

061130-13
SCHIERENBERG, BRUCKMANN, AND WETTIG PHYSICAL REVIEW E 85, 061130 (2012)

[Figure 9: top row, P(s₁) versus s₁ for λ = 0.1, 0.3, 0.6, 2 with Δ² = 0.004, 0.004, 0.004, 0.007; bottom row, P(s₂) versus s₂ for λ = 0.2, 0.4, 0.6, 1 with fitted α = 0.197, 0.41, 0.674, 1.181 and Δ² = 0.003, 0.003, 0.003, 0.004.]

FIG. 9. (Color online) Spacing distributions between previously degenerate eigenvalues s₁ (top) and previously nondegenerate eigenvalues s₂ (bottom) for the transition GSE → GUE without self-dual symmetry for various values of the coupling parameter λ in Eq. (3.16). The histograms show the numerical data, while the full curves are the 2 × 2 GUE surmise P₂ (top) and the surmise P₄₂²(s₂; α) given in Eq. (2.76) (bottom), the latter with fitted coupling parameter α. The quantity Δ² defined in Appendix F is a measure of the fit quality, which is small for a good fit. The numerical data were obtained by diagonalizing 50 000 random matrices of dimension 400 for each plot.

As in Sec. II H, we will separately consider the spacings between originally degenerate eigenvalues and the remaining ones. The distributions of the former were obtained by measuring every second spacing, starting with the first one of each random matrix. They are plotted in Fig. 9 (top) and show perfect agreement with the 2 × 2 GUE surmise, which is practically indistinguishable from the exact result derived for 4 × 4 matrices, P₄₂¹(s₁; α) given in Eq. (2.69). As in Sec. II H 2, this distribution is almost independent of the coupling parameter.

The distribution of the spacings between previously nondegenerate eigenvalues is shown in Fig. 9 (bottom). Again, every second spacing was measured, but starting with the second one this time. We get an almost perfect agreement of the numerical data with the surmise P₄₂²(s₂; α) defined in Eq. (2.76) throughout the transition. The parameter α was again determined by a fit (see Appendix F) and approximately matches the perturbative prediction from Sec. III A.

E. Other transitions between the GSE and ensembles without self-dual symmetry

Let us now consider the transition from the GSE to either the GOE or Poisson, both without self-dual symmetry. These two cases are more complicated than the cases discussed so far because, as we shall discuss now, the transitions proceed via an intermediate transition to the GUE.

Let us first focus on the case

    H = H₄ + [λ/(ρ₄(0) s̄₄)] H₁ ,    (3.17)

where H₄ is from the GSE, H₁ is from the GOE without self-dual symmetry, and we again concentrate on the central part of the spectrum (near zero). For small λ, we show in Appendix D in first-order perturbation theory that the perturbation by the GOE has exactly the same effect on the eigenvalues as the perturbation by the GUE considered in Sec. II H, modulo a rescaling of the coupling parameter, i.e.,

    P₄₁¹(s₁; λ) = P₄₂¹(s₁; λ/√2) ≈ P₂(s₁) ,    (3.18)
    P₄₁²(s₂; λ) = P₄₂²(s₂; λ/√2) .    (3.19)

Therefore, we first expect a transition from the GSE to the GUE, corresponding to the breaking of the self-dual symmetry. This expectation is confirmed in Fig. 10 (top and middle). As λ is increased to very large values, a transition to GOE behavior must eventually occur. The question is whether this transition is described by the surmise of Sec. II E. We show in Fig. 10 (bottom) that this is indeed the case. Note that a rising λ amounts to a shrinking fitted coupling parameter because the direction of the transition is turned around compared to Sec. II E. Here, λ → ∞ means that H is a pure GOE matrix, which is described by the surmise with α = 0.

The case of GSE to Poisson without self-dual symmetry is analogous. For small values of the coupling parameter, the self-dual symmetry of the GSE is broken by the perturbation so that we expect a GSE to GUE transition for the spacings s₁ and s₂ as in the GSE to GOE case considered above. For very large values of the coupling parameter we should eventually find a transition to Poisson behavior, described by the surmise of Sec. II C. We have confirmed these expectations numerically but do not show the corresponding plots here.

Note that in the transitions considered in Secs. III B through III D a single antiunitary symmetry (or integrability in the

[Figure 10: top row, P(s₁) versus s₁ for λ = 0.1, 0.3, 0.6, 2 with Δ² = 0.004, 0.004, 0.003, 0.008; middle row, P(s₂) versus s₂ for λ = 0.2, 0.4, 0.6, 1 with fitted α = 0.197, 0.405, 0.621, 1.141 and Δ² = 0.003, 0.004, 0.003, 0.005; bottom row, P(s) versus s for λ = 200, 500, 1000, 2000 with fitted α = 0.723, 0.28, 0.138, 0.068 and Δ² = 0.006, 0.015, 0.015, 0.015.]

FIG. 10. (Color online) Spacing distributions for the transition GSE → GOE without self-dual symmetry for various values of the coupling parameter λ in Eq. (3.17). Top: Spacings s₁ between previously degenerate eigenvalues (for small λ). Middle: Spacings s₂ between previously nondegenerate eigenvalues (also for small λ). Bottom: All spacings (for large λ). The histograms show the numerical data, while the full curves are the 2 × 2 GUE surmise P₂ (top), the surmise P₄₁²(s₂; α) given in Eq. (3.19) (middle), and the surmise P₁₂(s; α) given in Eq. (2.33), the latter two with fitted coupling parameter α. The quantity Δ² defined in Appendix F is a measure of the fit quality, which is small for a good fit. The numerical data were obtained by diagonalizing 50 000 random matrices of dimension 400 for each plot.

case of Poisson) was broken or restored. In contrast, we now have two transitions. As λ increases from zero, an antiunitary symmetry T with T² = −1 gets broken. As λ decreases from infinity, either an antiunitary symmetry with T² = 1 gets broken (in the case of GOE) or integrability gets broken (in the case of Poisson). For intermediate values of λ the system follows GUE statistics because all antiunitary symmetries and/or integrability are broken. This is illustrated in Fig. 11.

[Figure 11: schematic axis in the coupling λ from 0 to ∞, with GSE → GUE → GOE and the labels "T² = −1 broken" between GSE and GUE and "T² = 1 broken" between GUE and GOE.]

FIG. 11. Schematic picture of the transition from GSE to non-self-dual GOE, which proceeds via an intermediate transition to the GUE. An analogous picture applies to the transition from GSE to non-self-dual Poisson.

IV. SUMMARY

We have derived generalized Wigner surmises for the nearest-neighbor spacing distributions of various mixed RMT ensembles from 2 × 2 and 4 × 4 matrices. If the GSE was involved in the transition, we have distinguished two cases: (i) perturbations of the GSE by a self-dual ensemble and (ii) perturbations of the GSE by a non-self-dual ensemble, for which we separately considered two different kinds of spacings.

We have shown that all of these distributions yield a good description of the spectra of large mixed matrices when restricted to a range of constant spectral density. The coupling parameters in the generalized Wigner surmise and in the large mixed matrices are related via the local eigenvalue density of the latter. This relation is well approximated by Eq. (3.7).

We expect that the results for P(s) derived in this paper will be useful in numerical and/or experimental studies of systems with mixed symmetries, such as those mentioned in Sec. I. P(s) is a convenient quantity that is easily analyzed numerically or experimentally and typically does not suffer from serious unfolding issues. In particular, the properties of the level spacings should help us to clarify whether the mixing of the symmetry classes in a given physical system is of the


additive type (1.4) we have investigated here. If so, fits to the generalized Wigner surmises provide estimates of the coupling parameter in terms of the local eigenvalue density. In turn, the coupling parameter could quantify other properties of the mixed systems.

ACKNOWLEDGMENTS

We thank Thomas Guhr for communication at an early stage of this work. We also acknowledge DFG (BR 2872/4-2 and SFB/TR-55) and EU (StrongNet) for financial support.

APPENDIX A: ANALYSIS OF THE LARGE-s BEHAVIOR

We consider the large-s behavior of the spacing distributions for the three transitions from Poisson to RMT. For simplicity, we first consider the non-normalized spacing S and convert to the normalized spacing s at the end. We start with the initial ansatz for the distributions,

    I(S) = ∫_{0}^{∞} dp ∫ da db ∏_{ν=0}^{β−1} dc_ν  P₀(p) P_a(a) P_b(b) ∏_{ν=0}^{β−1} P_{c_ν}(c_ν) δ(S − √{[a − (b + p/α)]² + 4 Σ_{ν=0}^{β−1} c_ν c_ν}) ,    (A1)

which is the generalization of Eq. (2.23) to β = 1, 2, and 4. As in Appendix C 1, we introduce new variables u = a + b and t = a − b, transform the c_ν to spherical coordinates, and eliminate the δ function by integrating out the radius. This yields

    I(S) ∝ ∫_{0}^{∞} dp ∫_{−S}^{S} dt (S² − t²)^{β/2−1} S e^{−p − p²/(4α²) − pt/(2α) − S²/4}
         ∝ S^β e^{−S²/4} ∫_{0}^{∞} dp e^{−p² − 2αp} ∫_{−1}^{1} dx (1 − x²)^{β/2−1} e^{−pxS}
         ≡ S^β e^{−S²/4} ∫_{0}^{∞} dp e^{−p² − 2αp} X_β(pS) ,    (A2)

where we substituted t = xS, rescaled p → 2αp, and expressed the x integral (up to normalization) as

    X_β(pS) = (pS)^{(1−β)/2} I_{(β−1)/2}(pS) ,    (A3)

where I_ν is a modified Bessel function [[35], Eq. (9.6.18)]. We now compute the integral in Eq. (A2) in saddle-point approximation, assuming S to be large. The asymptotic expansion of X_β for β = 1, 2, 4 reads

    X_β(pS) ≃ (2π)^{−1/2} (pS)^{−β/2} e^{pS} .    (A4)

For p = O(S⁻¹) we cannot use this expansion, but the contribution of this region to the integral can be shown to be negligible compared to the leading order we consider here. The exponential in the integrand is now

    e^{−p² − 2αp + pS} ,    (A5)

with a maximum at p_max = S/2 − α. Standard manipulations then yield the saddle-point result

    I(S) ∝ e^{−αS} [1 + O(S⁻¹)] .    (A6)

The normalized distributions are obtained from I(S) by rescaling the spacing and restoring the normalization factors that were omitted in the calculation above, resulting in

    P_{0β}(s; α) = e^{−2αDs} [2αD e^{α²} + O(s⁻¹)] ,    (A7)

where we replaced S by 2Ds with D given in Eqs. (2.5), (2.13), and (2.27) for β = 1, 2, 4, respectively. The meaning of this result is that for arbitrarily large (but finite) α, i.e., arbitrarily close to the pure Gaussian ensemble, the large-s behavior is Poisson-like. This is in contrast to the small-s behavior, which is dominated by the Gaussian ensemble for arbitrarily small (but nonzero) α. The findings for the large-s behavior were also confirmed numerically.

APPENDIX B: ANALYSIS OF THE GIBBS PHENOMENON

The spacing distributions of ensembles interpolating between Poisson and RMT reveal a Gibbs-like phenomenon close to the Poisson limit, i.e., for small α: P(s; α) does not converge uniformly to the Poisson curve e^{−s} at s = 0. Rather, there is an overshoot whose amount does not vanish in the α → 0 limit and whose position approaches 0 in this limit. In this Appendix we work out the value and position of this maximum.

We start with a brief review of the Gibbs phenomenon in the Fourier transform, as known from textbooks such as [43] (which, however, mostly discuss the Gibbs phenomenon only in the Fourier series).

1. Gibbs phenomenon in the Fourier transform

The Gibbs phenomenon is related to the convergence of the inverse Fourier transform with a cutoff in the integral (or to the convergence of the Fourier series with a cutoff in the sum) toward the original function f. Let us denote its Fourier transform by F,

    F(ω) = (1/2π) ∫ ds e^{−iωs} f(s) ,    (B1)

and the result of the cutoff inverse transform by f with two arguments,

    f(s; ε) = ∫_{−1/ε}^{1/ε} dω e^{iωs} F(ω) .    (B2)

These formulas can be combined into a convolution

    f(s; ε) = ∫ ds′ f(s′) δ_ε(s − s′)    (B3)

of the original function with the Dirichlet kernel

    δ_ε(s − s′) = (1/2π) ∫_{−1/ε}^{1/ε} dω e^{iω(s−s′)} = sin[(s − s′)/ε] / [π(s − s′)] .    (B4)

The question is how f(s; ε) is related to the original function f(s) in the ε → 0 limit.⁷ If f(s) is smooth and absolutely
integrable, f(s; ε) approaches it everywhere. Accordingly, the Dirichlet kernel approaches the delta distribution in the sense of acting on smooth test functions.

At discontinuities of the original function f(s), however, f(s; ε) approaches the average of the left and right limits of f(s). Intuitively, this comes from the nonzero width of the Dirichlet kernel,⁸ which in the convolution (B3) probes both sides of the discontinuity. Furthermore, the functions f(s; ε) for fixed ε possess maxima and minima whose positions move, in the limit ε → 0, toward the discontinuity and whose values over- and undershoot the function. This is the Gibbs phenomenon.

For definiteness let us consider a set of exponentially decaying functions with a jump discontinuity of unit size⁹ at s = 0,

    f(s) = 0 for s < 0 ,   e^{−ds} for s > 0 ,    (B5)

that include the Poisson curve (d = 1) and the Heaviside function (d = 0). The Fourier transforms are

    F(ω) = (1/2π) · 1/(d + iω) ,    (B6)

and the cutoff inverse transforms read

    f(s; ε) = (i/2π) e^{−ds} [Ei(ds − is/ε) − Ei(ds + is/ε)] .    (B7)

For the Poisson curve these functions are plotted for three small values of ε in Fig. 12 (top), where several maxima above and several minima below e^{−s} are clearly visible.

To analyze the limit ε → 0 we can zoom in on the region of small s, of size proportional to ε. This amounts to considering functions of a rescaled argument

    s̃ = s/ε    (B8)

in a constant-s̃ range. We define

    f̃(s̃; ε) = f(εs̃; ε) = (i/2π) e^{−dεs̃} [Ei(dεs̃ − is̃) − Ei(dεs̃ + is̃)] .    (B9)

If we keep s̃ fixed, these functions have a well-defined limit as ε → 0,

    g(s̃) = lim_{ε→0} f̃(s̃; ε) = (i/2π) [Ei(−is̃) − Ei(is̃)] = 1/2 + Si(s̃)/π ,    (B10)

with the sine integral Si(s̃) = ∫_{0}^{s̃} dx sin x / x. As Fig. 12 (bottom) shows, this limiting function captures infinitely many maxima at s̃ = π, 3π, . . . and infinitely many minima at s̃ = 2π, 4π, . . . . The overshoot at the first maximum is the well-known number

    1/2 + Si(π)/π − 1 = 0.0894899 . . . .    (B11)

⁷The ε → 0 limit of f(s; ε) is also denoted as the principal value.
⁸This nonzero width is relevant in many areas of physics such as band-limited signals, ringing, and diffraction of waves at slits.
⁹As all formulas are linear in f(s), the case of arbitrary jumps is completely analogous.

[Figure 12: top panel, f(s; ε) versus s for 0 ≤ s ≤ 0.1; bottom panel, f̃(s̃; ε) versus s̃ for 0 ≤ s̃ ≤ 30; vertical axes from 0.5 to 1.1.]

FIG. 12. (Color online) The Gibbs phenomenon in the Fourier transform of the Poisson curve, with ε = 0.01, 0.005, 0.0025, respectively. Top: The functions f(s; ε) approaching the original function e^{−s} (dashed), first maximum moving to the left as ε decreases. Bottom: The rescaled functions f̃(s̃; ε) approaching the limiting function g(s̃) (dashed) with decreasing ε (see text).

Concerning the convergence of the Fourier transform, we conclude that in the limit ε → 0 the functions f(s; ε) have a maximum at s = πε, with an overshoot approaching 8.9%. Note that the limiting function g is the same for all these functions independently of the decay constants d; i.e., it is solely determined by the discontinuity. In other words, the smooth part of the function f(s) drops out when going from f̃(s̃; ε) to g(s̃) in the ε → 0 limit [see Eq. (B9) versus Eq. (B10)]. This can be shown to be universal. Rescaling the integration variable in Eq. (B3) and using δ_ε(x) = ε⁻¹ δ₁(x/ε) one has

    f(s; ε) = ∫_{0}^{∞} ds′ f(s′) ε⁻¹ δ₁((s − s′)/ε) ,    (B12)

    f̃(s̃; ε) = ∫_{0}^{∞} ds̃′ f(εs̃′) δ₁(s̃ − s̃′) ,    (B13)
where we still assume f(s < 0) = 0 for simplicity. The limiting function is

    g(s̃) = f(0⁺) ∫_{−∞}^{s̃} dt δ₁(t) = f(0⁺) [1/2 + Si(s̃)/π] ,    (B14)

which agrees with (B10) for all functions with f(0⁺) = 1. The last equation in particular relates the Dirichlet kernel and the limiting function g. Therefore, f(s; ε) can also be reconstructed by a convolution with (the derivative of) g,

    f(s; ε) = ∫_{0}^{∞} ds′ f(s′) ε⁻¹ g′((s − s′)/ε) ,    (B15)

where g′(x) = dg/dx.

2. Gibbs phenomenon in the Poisson to RMT transitions

For the Gibbs-like phenomenon in the mixed spacing distributions we start with the Poisson to GSE case. For this transition we found the spacing distribution Eq. (2.26). In analogy to the previous section we rescale the argument and define

    P̃₀₄(s̃; α) ≡ P₀₄(αs̃; α) = C α⁴ s̃⁴ e^{−D²α²s̃²} ∫_{−1}^{1} dx (1 − x²) e^{(xDαs̃)² + 2α·xDαs̃} erfc(xDαs̃ + α) .    (B16)

In the limit α → 0 we make use of the behavior of C(α) and D(α),

    D(α) ≈ 1/(2α)  and  C(α) ≈ 1/(2α)⁴ ,    (B17)

to arrive at the limiting function

    g₀₄(s̃) = lim_{α→0} P̃₀₄(s̃; α)
            = (s̃⁴/16) e^{−s̃²/4} ∫_{−1}^{1} dx (1 − x²) e^{(xs̃/2)²} erfc(xs̃/2)
            = (s̃/8) [√π (2 + s̃²) e^{−s̃²/4} erfi(s̃/2) − 2s̃] .    (B18)

In Fig. 13 we plot this function, together with P̃₀₄(s̃; α), approaching it for small α.

Again, this limiting function captures a maximum, which can numerically be determined to be at s̃ = 3.76023 with a value of 1.43453. As before, the phenomenon is solely determined by the discontinuity of the Poisson curve at s = 0 as the original Poisson distribution e^{−p} can be shown to drop out from Eqs. (B16) to (B18).

So in the transition from Poisson to GSE, P₀₄(s; α) in the limit of small α has a maximum at s̃ = 3.76, overshooting the Poissonian e^{−s} by 43.5%. Likewise, in the other transitions P₀₁(s; α) and P₀₂(s; α) we have equivalently defined limiting functions

    g₀₁(s̃) = (√π/2) s̃ e^{−s̃²/8} I₀(s̃²/8) ,    (B19)

    g₀₂(s̃) = (√π/2) s̃ e^{−s̃²/4} erfi(s̃/2) .    (B20)

[Figure 13: P̃₀₄(s̃; α) versus s̃ for 0 ≤ s̃ ≤ 10, with the maxima increasing toward the dashed limiting curve.]

FIG. 13. (Color online) The rescaled spacing distributions P̃₀₄(s̃; α) in the Poisson to GSE transition, Eq. (B16), for α = 0.05, 0.025, 0.01 (maxima increasing) approaching the limiting function g₀₄(s̃), Eq. (B18) (dashed curve).

These have maxima at s̃ = 2.51 and s̃ = 3.00, overshooting the Poisson curve by 17.5% and 28.5%, respectively, as quoted in the body of the paper. We observe that these numbers grow with the Dyson index β of the perturbing ensemble.

From the small-s̃ behavior g₀₄(s̃) = s̃⁴/12 + O(s̃⁶) we conclude that P₀₄(s; α) = s⁴/(12α⁴) + O(s⁶) for small s and α, which reproduces our observation in Eqs. (2.30) and (2.31). Analogous agreement is obtained with Eqs. (2.9), (2.10), (2.19), and (2.20) for the other two cases.

This concludes our empirical results on the Gibbs-like phenomenon.

For the analogies at a more fundamental level, the spacing distribution P₀₄(s; α) is related to the integral (2.23)

    I(S/α) = ∫_{0}^{∞} dp e^{−p} δ_α(S, p)    (B21)

of the unperturbed Poisson distribution with the kernel

    δ_α(S, p) = ∫ da · · · dc₃ P_a(a) · · · P_{c₃}(c₃) δ(S − √{(a − b − p/α)² + 4 c_ν c_ν}) .    (B22)

The nonzero width of this kernel causes the Gibbs phenomenon in the spacing distribution near the discontinuity of the Poisson distribution e^{−p} at p = 0. Note that in the limit α → 0 the second line of Eq. (B22) approaches δ(S − p), thus decoupling from the integrals over a, . . . , c₃. The latter are normalized by construction so that the kernel δ_α(S, p) approaches δ(S − p).

There are (at least) two features that are different from the Fourier case. First, the kernel is not a function of S − p, and thus Eq. (B21) is not a convolution, in contrast to the Fourier case, Eq. (B15). Second, at the discontinuity P₀₄(0; α) = 0 is not the average (equal to 1/2) of the left and right limits of the original Poisson curve e^{−p} (put to zero for negative p).
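As a quick numerical cross-check of the numbers quoted above (again an illustrative sketch, not part of the original analysis), the following Python snippet evaluates the limiting function of Eq. (B10) and the cutoff inverse Fourier transform of the Poisson curve, Eqs. (B2) and (B6), and reproduces the roughly 8.9% overshoot near s = πε.

    import numpy as np
    from scipy.special import sici

    # Overshoot of the limiting function g(s~) = 1/2 + Si(s~)/pi at s~ = pi
    si_pi, _ = sici(np.pi)
    print("limiting overshoot:", 0.5 + si_pi / np.pi - 1.0)   # ~ 0.0894899

    # Cutoff inverse Fourier transform of f(s) = exp(-s), Eq. (B2), for small eps
    def f_cut(s, eps, n_omega=20001):
        omega = np.linspace(-1.0 / eps, 1.0 / eps, n_omega)
        d_omega = omega[1] - omega[0]
        F = 1.0 / (2.0 * np.pi * (1.0 + 1j * omega))          # Eq. (B6) with d = 1
        return np.array([(np.exp(1j * omega * s_val) * F).sum().real * d_omega
                         for s_val in np.atleast_1d(s)])

    eps = 0.01
    s = np.linspace(0.0, 10 * eps, 400)
    vals = f_cut(s, eps)
    print("first maximum at s =", s[np.argmax(vals)], "value =", vals.max())
    # expected: s close to pi * eps ~ 0.0314, value close to 1.089

The grid sizes and the simple Riemann sum are arbitrary choices for this demonstration; any quadrature of Eq. (B2) with a sharp frequency cutoff shows the same overshoot.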

APPENDIX C: EXPLICIT CALCULATION OF SPACING a 0 c0 + ic3 c1 + ic2
DISTRIBUTIONS 0 c1 + ic2 c0 ic3
a
+ ,
1. Poisson to GSE c0 ic3 c1 ic2 b 0
We start from Eq. (2.25), transform c0 , . . . ,c3 to spherical c1 ic2 c0 + ic3 0 b
coordinates with c2 = c c , and introduce u = a + b and (C7)
t = a b. This yields
where the variances of the random variables are given by
 
p(ut) p2 Eq. (2.2). If two variables are Gaussian distributed with
du dt ep 4 (u +t )+ 2 22 c
1 2 2 2
I (S) dp dc c 3
variances 12 and 22 , their sum is again Gaussian distributed
0
 with variance 12 + 22 . Since H depends on A, B, and C and
(S t + 4c2 )
2
a, b, and c0 only through the combinations A + a, B + b,
 
pt p2 and C + c0 , we can immediately integrate out A, B, and C,
dt ep 2 42 4 (t +4c )
1 2 2
dp dc c3 with the corresponding change in the variances of a, b, and
0
 c0 . To simplify the notation,we absorb in H4 and divide
(S t 2 + 4c2 ), (C1) all matrix elements of H by 1 + 2 . This yields a problem
equivalent to Eq. (C7),
where in the last step we have integrated over u. We now use
the function to integrate over c, resulting in a 0 c0 + ic3 c1 + ic2
0 c1 + ic2 c0 ic3
  a
2
S4 p2
p
S
pt H , (C8)
I (S) S e dp e 42 dt (S 2 t 2 )e 2 c0 ic3 c1 ic2 b 0
0 S
 c1 ic2 c0 + ic3 0 b
2
S4 p2
2
p z cosh z sinh z
S4 e dp e 4
with
0 z3
J (S), (C2) 2
2
a,b = 2c20 = 1, 2c2i = 2, (C9)
1 + 2
where z = pS/2. where i = 1,2,3. The matrix in Eq. (C8) has two nondegener-
The normalized level spacing distribution P04 (s), ate eigenvalues whose spacing is given by
Eq. (2.26), is obtained from J (S) by rescaling and nor-
  1/2
malization, i.e., P04 (s) = C J (2Ds)/(2D)4 . Dening the 3

moments of the distribution, S = (a b) + 4


2
c c , (C10)
 =0

In = dS S n J (S), (C3) where we have again written S instead of s since we still need
0 to enforce the normalizations (1.2). The spacing distribution
is proportional to the integral
we obtain from Eq. (1.2) 
ci ci
da db dc0 dc1 dc2 dc3 e 2 (a +b +2c0 ) 2
1 2 2 2
I (S) =
I1 (2D)5
D= and C= . (C4) 
2I0 I0  
S (a b)2 + 4c02 + 4cj cj , (C11)
Explicit evaluation of I0 and I1 gives
where repeated indices indicate a sum over i and j from 1 to
3. We now transform c1 , c2 , c3 to spherical coordinates with
I0 = 4 , (C5)
c2 = ci ci and introduce u = a + b and t = a b. This yields
 
I1 = 4 dx e2x I (S)
1 2 2 2 c2
du dt dc0 dc c2 e 4 (u +t +4c0 ) 2
0

(4x 3 +2x)ex + (4x 4 +4x 2 1) erf(x)  
2

, (C6) S t 2 + 4c02 + 4c2 . (C12)


x3
The integral over u can be performed trivially and only results
from which we obtain Eqs. (2.27) and (2.28). in a prefactor. Using the function to integrate over c, we
obtain

2. GOE to GSE
dt dc0 e 4 (t +4c0 ) 4 2 (S t 4c0 )
1 2 2 1 2 2 2
I (S)
We consider the matrix H in Eq. (2.40). With a small change 0

in notation for H1 , we have  
S S 2 t 2 4c02 S 2 t 2 4c02 , (C13)

A 0 C 0 where we have used the symmetries of the integrand to raise
0 A 0 C
the lower limit of the integrations to zero. We now perform the
H = H1 12 + H4 = transformation
C 0 B 0 
0 C 0 B t = Sx and c0 = 12 Sy 1 x 2 (C14)



with Jacobian 12 S 2 1 x 2 . Since t and c0 are non-negative, We show in the following that M is a 2 2 GUE matrix
so are x and y. The function in Eq. (C13) then implies for  = 2 as well as for  = 1, in the latter case with a
0  x,y  1. Reinserting the denition of 2 from Eq. (C9), normalization different from Eq. (2.2).
we obtain
 1  1. GUE
S2
dx dy (1 x 2 ) 1 y 2 e 42 [ +(1x )(1y )] .
2 2 2
I (S) S 4
0 This case is very simple, because the GUE is invariant
(C15) under unitary transformations, which contain the symplectic
transformations. This means that the transformation diagonal-
We now substitute y = cos , note that cos2 = 12 (1 izing the GSE matrix H4 can be absorbed in H2 without loss
cos 2), and use the integral representation [[35], Eq. (9.6.19)] of generality, and therefore one can choose |i k = ik with
of the modied Bessel functions I0 and I1 to obtain after some i = 1,2 and k = 1, . . . ,2N . Thus, we obtain
algebra
 1 
2N
1+22 2
I (S) S 4 e 82 S
S2 x2
dx (1 x 2 ) e 82 [I0 (z) I1 (z)] Mij = ik (H2 )kl lj = (H2 )ij , (D3)
0 k,l=1
J (S), (C16) which is obviously a 2 2 matrix from the GUE with the usual
with z = (1 x 2 )S 2 /(82 ). This corresponds to Eq. (2.41) normalization, Eq. (2.2). As this holds also for N = 2, it is a
with S = 8Ds. The properly normalized spacingdistribu- perturbative explanation for the fact that in the limit 0
tion is therefore given by P14 (s) = CJ ( 8Ds)/( 8D)4 . the spacings between previously degenerate eigenvalues are
Dening distributed exactly like the ones of 2 2 GUE matrices.

In = dS S n J (S) (C17) 2. GOE
0
we obtain from Eq. (1.2) We will show that in this case M is again a matrix from the
GUE with the only difference that the variances of its elements
I1 ( 8D)5 are only half as large as in the previous section. This case is
D= and C= . (C18)
8I0 I0 a bit more involved because one cannot generally diagonalize
a self-dual matrix by an orthogonal transformation (which
Explicit evaluation of I0 and I1 gives would preserve the probability distribution of H1 ), and thus
 3/2
2 it is impossible to choose the eigenvectors of H4 as in the
I0 = 8 , (C19) previous section. Explicitly, the matrix elements read
1 + 2
   ' '
 ' '

3 (1 )
2
Mij = i |H1 |j  = ire 'H1 'jre + iim 'H1 'jim
I1 = 16 + arccot , (C20)  ' '
 ' '

(1 + 2 )2 + i re 'H1 ' im im 'H1 'j re , (D4)
i j i
from which we obtain Eqs. (2.42) and (2.43).
where we split the eigenvectors |i  into real and imaginary
parts: |i  = |i re  + i|iim , and H1 is real.
APPENDIX D: PERTURBATION OF A LARGE GSE We will now show that the four vectors |1re , |1im , |2re ,
MATRIX BY A NON-SELF-DUAL MATRIX and |2im  are orthogonal in the limit of innite matrix size.
We consider a mixed 2N 2N matrix that interpolates For some combinations of them one can show this also for
between the GSE and one of the other Gaussian ensembles, nite N using the quaternionic structure of the eigenvectors,
 
 1 |
H = H4 + H  , (D1) = (q1 q2 qN ), (D5)
4 (0)s4 2 |
where H4 is taken from the GSE and H  from the GOE
with quaternions in matrix representation
or GUE. We study this matrix for large N in rst-order  (0) 
degenerate perturbation theory to show similarities between qk + iqk(3) qk(1) + iqk(2)
the two different perturbations and to make a connection to qk = . (D6)
the case of GSE to non-self-dual GUE for N = 2, which was qk(1) + iqk(2) qk(0) iqk(3)
treated in Sec. II H. One can read off immediately that
Degenerate perturbation theory predicts that each of the N  re ' re
 im ' im

previously degenerate eigenvalue pairs splits up and that the 1 '2 = 1 '2 = 0, (D7)
shifts of the two members of the pair are the eigenvalues of  re ' im
 '

the matrix 1 '1 = 2re '2im , (D8)


  re ' im
 im ' re

4 (0)s4
Mij with Mij = i |H  |j ; i,j = 1,2. (D2) 1 '2 = 1 '2 ; (D9)

The |1,2  are the orthonormal eigenvectors of the unperturbed i.e., there are only two independent scalar products.
()
matrix H4 that span the degenerate subspace of the eigenvalue Let us assume that for large N the qk can be treated as
pair under consideration. independent random variables with mean value zero. Then the


mean values of those scalar products are zero as well, e.g., where H  is chosen in the usual normalization [see Eq. (2.2)].
The calculations are done for arbitrary matrix dimension,
 '


N
 (0) (3)
which will be sent to innity at the end. We denote the number
1re '1im = qk qk + qk(1) qk(2) = 0, (D10)
of generically nondegenerate eigenvalues by N ; i.e., we
k=1
consider N N matrices. If H  is taken from the GSE, these
where the outer angular brackets indicate an average over are quaternion valued and correspond to complex 2N 2N
the random matrix ensemble. From the normalization of the matrices.
()
eigenvectors |i  the variances of the qk are proportional to To obtain an N -independent eigenvalue density of the
1/N. This yields for the variances of the scalar products Poissonian ensemble, we dene the probability distribution
of the individual eigenvalues i of H0 by
 '
2

N
 (0) 2
 (3) 2
 (1) 2
 (2) 2

1re '1im = qk qk + qk qk 1
k=1 P0 (i ) = P0 (i /N ), (E2)
N
N
1 1
2
= (D11) where P0 is some N -independent probability distribution.
N N
k=1 Both P0 and P0 are normalized to one. The eigenvalue density
and likewise for 1re |2im . Since in the N limit both the of the Poissonian ensemble is thus
mean values and the variances of the scalar products vanish,
the four vectors become orthogonal in this limit for every 0 ( ) = N P0 ( ) = P0 (/N) = P0 ( ), (E3)
single realization of the random matrix. We have checked
where we have dened = /N. Generically, we have i =
this numerically, which implies that the assumption of the
O(N ) and i = O(1).
independence of the qk() was valid. We now consider a xed spacing S between two adjacent
As for the normalization of the four vectors, the squared eigenvalues of H0 , 1 and 2 = 1 + S. The remaining eigen-
norms of the real and imaginary parts agree on average and values have to reside outside the interval (1 ,2 ). This results
sum up to 1 due to the normalization of the eigenvectors |i . in the conditional probability distribution
Invoking the central limit theorem, we observe that in the limit
N the norms of the real and imaginary parts equal 1/ 2 1 out
even for a single realization of the random matrix. Hence, P0out (i ) = P (i /N )
N 0
multiplying the four real vectors |1re , |1im , |2re , and |2im  
0 for i (1 ,2 ),
by 2, one obtains, in the limit N , an orthonormal real = P () (E4)
& 0 otherwise.
basis in the subspace under consideration. 1 2 d  P0 (  )
1
Finally, we use the fact that the matrix elements of a
GOE matrix H1 are independent random numbers in every The eigenvalue density is assumed to be almost unaffected
orthonormal (real) basis, with variances 1 and 1/2 on and off by the perturbation, which is conrmed in Fig. 6 (top). Of
the diagonal, respectively. Thus we conclude that the Mij are course, this assumption is expected to hold only for small
also independent random numbers with variances values of the coupling parameter.
  re 2
  re 2
1 We want to calculate the effect of the perturbation on the
M11 = M22 = 2, (D12) spacing S. If the remaining eigenvalues of H0 are close to 1
  re 2
  im 2
1 or 2 we have to apply almost-degenerate perturbation theory.
M12 = M12 = 4. (D13) Up to second order in we obtain for the perturbation of the
These are half the variances of a GUE matrix, which is spacing,
equivalent
to a multiplication of each element of M by
S = (EVD[(H0 + H  )kl|k ,l W ] S)
1/ 2. This explains the rescaling of the coupling parameter  
1 2
in the denitions of P41 (s1 ; ) and P41 (s2 ; ), Eqs. (3.18) rst-order almost-degenerate perturbation theory
and (3.19). N  
|(H  )2i |2 |(H  )1i |2
For small N the argument in this section does not work. + 2 , (E5)
Presumably, this is the reason why the spacing distributions i=3|i W
/
2 i 1 i
for the transition from GSE to GOE differ from those for the  
second-order perturbation theory
transition from GSE to GUE in the case of 4 4 matrices (not
shown in this paper, but checked numerically), whereas they where the absolute values are taken with respect to the real,
match very well for large matrices. complex, or quaternionic standard norm, EVD denotes the
difference of the two eigenvalues of the matrix (H0 + H  )kl
APPENDIX E: PERTURBATIVE CALCULATION OF THE that correspond to the unperturbed eigenvalues, and W is
RELATION BETWEEN EIGENVALUE DENSITY AND the interval in which eigenvalues have to be considered
COUPLING PARAMETER almost degenerate with 1 or 2 . This is dened by the
eigenvalue range (1 CW ,2 + CW ), where we choose CW =
We consider a diagonal Poissonian matrix H0 perturbed by (0) (0)
a matrix taken from one of the Gaussian ensembles H  , CW N with 0 < < 1 and CW > 1. This choice ensures
that the closest possible eigenvalue outside W cannot give
H = H0 + H  , (E1) a second-order contribution of lower order in than the


almost-degenerate part.10 Note that the degenerate window where we dened a = |(H  )1i |2 and b = |(H  )2i |2 . Its prob-
W grows with N. Therefore arbitrarily distant eigenvalues ability distribution is given by
are considered almost degenerate in the limit N , which  1 CW  
1 out
is justied because almost-degenerate perturbation theory is Px (x) = + d P0 (/N)
valid for any difference of eigenvalues. 2 +CW N

Considering the rst-order contribution, we have to deal
with the matrix da db P  (a)P  (b)
0
  
b a
Mkl = (H0 )kl + (H  )kl = k kl + (H  )kl , (E6) x 2
, (E11)
2 1
where the indices k and l run over all values for which the where we renamed i = for convenience. The distribution
eigenvalues k and l are localized in W , which includes at P  depends on the symmetry class of the perturbing ensemble
least 1 and 2 . This is a matrix taken from the Poissonian (a and b are squared sums of  Gaussian random variables).
ensemble perturbed by a matrix taken from one of the Gaussian The moments of this distribution are
ensembles, but unlike H dened in Eq. (E1) it has a constant 
eigenvalue density in the limit N . To show this, we rst pm = dx Px (x) x m . (E12)

consider the density at the lower end of the interval W ,
After a short calculation, we obtain
   0    
1 CW (0)
lim N P0 (1 CW ) = lim P0 1 out CW
N N N N pm = d da db P + 1
 N 0 N
(0) 1 
0
= lim P0 1 CW N  (0) 
N out S CW
+ P0 + 1 P  (a)P  (b)
= P0 (1 ) = 0 (1 ). (E7) N
  m
b a
This is the same as the eigenvalue density at the other end 2 (0)
(0)
.
S + CW N CW N
of W ,
(E13)
 
1 S + CW In the limit N , all terms that are divided by N in the
lim N P0 (2 + CW ) = lim P0 +
N N N N arguments of P0out can be neglected. This can be done in spite of
  being integrated to , because the last part of the integrand
S (0) 1
= lim P0 1 + + CW N (in square brackets) suppresses the large- region and because
N N
P0out is a probability density that has to converge to 0 for large
= P0 (1 ) = 0 (1 ). (E8) argument. Also, limN P0out (1 ) = P0 (1 ) = 0 (1 ). We thus
obtain
Thus the spectrum of M can be unfolded by multiplying with  
20 (1 ) 0
the local eigenvalue density, pm = d da db P  (a) P  (b)
N 0
  m
0 (1 ) Mkl = 0 (1 )k kl + 0 (1 ) (H  )kl . (E9) b a
    2
(0)
(0) .
unfolded effective coupling S + CW N CW N
(E14)
Therefore we can dene a new effective coupling parameter
that solely determines the magnitude of the perturbation as in Let us denote the second line of Eq. (E5) by S (2) . It is
Sec. III A. O(Np1 ), and therefore its mean value becomes zero for
The second-order contribution to S in Eq. (E5) is a sum N , as it is suppressed by N . The same holds for
of at most N 2 independent random numbers. As all of these the second moment of S (2) , which goes like N 2 . Thus
random numbers have the same distribution we pick out one the distribution of S (2) is a delta function at zero, and
of them, we can neglect its contribution to the perturbation of the
spacing. The linear relation between eigenvalue density and
  coupling parameter could hence be shown up to second-order
b a
x= 2
with i
/ W, (E10) perturbation theory.
2 i 1 i
APPENDIX F: METHOD FOR FITS TO THE SURMISES
Since most of the analytical formulas for the small matrices
contain integrals, it takes some time to compute them numer-
ically. In order to get good ts to data in a reasonable time, a
10
An eigenvalue i = 2 + CW at the border of W yields list of 1000 values in the interval (0.01,10) was created, with
(0)
a second-order shift of 2 = 2 |(H  )i2 |2 /(CW N ) = i1
(0)
|(H  )i2 | /(CW N ).
2
i = 0.01 1000 999 ; i = 1, . . . ,1000. (F1)


For each i and each surmise, the corresponding spacing We now show that the transformed matrix O T MO is self-
distribution was stored. The pure cases = 0 and = were dual, the condition for which is
included as well. !
As a measure of the t quality, we use the L2 distance O T MO = J (O T MO)T J T = J O T M T OJ T
 (1/2 !
M = OJ OM T OJ T O (G3)
2 = dx [f (x) g(x)]2 (F2)
with
 
between the t and the numerical data. The tting was done 0 1
J = 1N . (G4)
by calculating the 2 value of each spacing distribution in the 1 0
list. From the one resulting in the smallest 2 we read off Multiplying J by O from the left and the right interchanges the
the coupling . Note that the largest 2 we encounter in all second, fourth, . . . with the (N + 1)th, (N + 3)th, . . . . column
the ts is 0.019. For comparison, the L2 norms of the pure and row. We thus obtain
Wigner surmises P (s) range from 0.71 for = 0 to 0.94 for  
= 4. We give no error bars, because the statistical errors 0N 1N
OJ O = ,
of obtained by methods such as the jackknife method were 1N 0N
negligibly small. This is also the reason why we use 2 instead  
of a statistical quantity like chi-squared as a measure of the t 0N 1N
OJ O = OJ O =
T
, (G5)
quality. 1N 0N
and hence
   
0N 1N H T 0N 0N 1N
APPENDIX G: CONSTRUCTION OF A SELF-DUAL GUE OJ OM OJ O =
T T
1N 0N 0N H 1N 0N
In the following we construct a Hermitian, self-dual  
2N 2N matrix whose eigenvalues are twofold degenerate H 0N
= = M, (G6)
and whose nondegenerate eigenvalues correspond to those of 0N HT
a matrix from the GUE. We start with a matrix M that contains
which proves Eq. (G3). O T MO can therefore be written as a
an N N GUE matrix H and its complex conjugate (equal to
quaternion matrix with real quaternions and their conjugates
the transpose),
at the transposed position. Each of these quaternions stands
 
H 0N for a matrix of the form
M= . (G1)    
0N H c0 + ic3 c1 + ic2 q p
= , (G7)
The eigenvalues of M are obviously those of H , but now c1 + ic2 c0 ic3 p q
twofold degenerate as desired. However, M is not self-dual. with complex numbers p and q. Evidently, p has to be
To transform M into a self-dual matrix without changing its zero for each quaternion in O T MO, because our original M
eigenvalues, we apply an orthogonal transformation generically contains no element which is the negative complex
1 0 0 0 0 0 0 0 0 0 0 0 conjugate of any other, and we only exchanged elements by
0 0 0 0 0 0 1 0 0 0 0 0
00 00 10 00 0 0 0 0 0 0 0 0 applying O. This means that at least half of the matrix elements
0 0 0 0 1 0 0 0 are zero. In the original M, exactly half of the matrix elements
.. .. .. .. .. . ..... .
. . . . . .. .. .. .. .. . . .. were zero, while the other half were random variables which
0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0
10 0000 depended on a total of N 2 real parameters, so the same has

O= = O T = O 1 , (G2) to hold for O T MO. From this and Hermiticity if follows that
0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 every off-diagonal q has to be an independent complex random
0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0
number, while the q on the diagonal are real, so that there are
. . . . again N 2 real degrees of freedom.
. . . . . . .. .. .. .. .. . . ..
.... . . .... . . With this equivalence proven, one can construct a self-dual
0000 01 0000 00
0 0 0 0 0 0 0 0 0 0 0 1 GUE matrix by taking a matrix from the GSE and set its c1
which transforms a matrix by exchanging every 2nth row and and c2 components to zero. This matrix has the same joint
column with the (N + 2n 1)th one. Each of the four blocks is probability density of the eigenvalues as an N N matrix
a square matrix of dimension N . This is in complete analogy to taken from the GUE, as it is related to a matrix of the form of
the construction of a self-dual 4 4 GUE matrix in Sec. II G. M by a xed basis transformation.

[1] T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, Phys. Rep. 299, 189 (1998).
[2] J. Verbaarschot and T. Wettig, Annu. Rev. Nucl. Part. Sci. 50, 343 (2000).
[3] G. Akemann, J. Baik, and P. Di Francesco (eds.), The Oxford Handbook of Random Matrix Theory (Oxford University Press, Oxford, 2011).
[4] T. A. Brody, J. Flores, J. B. French, P. A. Mello, A. Pandey, and S. S. M. Wong, Rev. Mod. Phys. 53, 385 (1981).
[5] M. L. Mehta, Random Matrices, 3rd ed. (Elsevier/Academic, Amsterdam, 2004).
[6] R. Grobe, F. Haake, and H.-J. Sommers, Phys. Rev. Lett. 61, 1899 (1988).


[7] G. Akemann, E. Bittner, M. J. Phillips, and L. Shifrin, Phys. Rev. E 80, 065201 (2009).
[8] E. P. Wigner, Oak Ridge National Laboratory Report No. 2309, 1957, p. 59 (unpublished).
[9] O. Bohigas, M. J. Giannoni, and C. Schmit, Phys. Rev. Lett. 52, 1 (1984).
[10] I. Dumitriu and A. Edelman, J. Math. Phys. 43, 5830 (2002).
[11] T. Cheon, T. Mizusaki, T. Shigehara, and N. Yoshinaga, Phys. Rev. A 44, R809 (1991).
[12] T. Shigehara, N. Yoshinaga, T. Cheon, and T. Mizusaki, Phys. Rev. E 47, R3822 (1993).
[13] A. Csordás, R. Graham, P. Szépfalusy, and G. Vattay, Phys. Rev. E 49, 325 (1994).
[14] A. Y. Abul-Magd, B. Dietz, T. Friedrich, and A. Richter, Phys. Rev. E 77, 046202 (2008).
[15] G. Lenz and F. Haake, Phys. Rev. Lett. 67, 1 (1991).
[16] P. Shukla and A. Pandey, Nonlinearity 10, 979 (1997).
[17] J. Sakhr and J. M. Nieminen, Phys. Rev. E 72, 045204 (2005).
[18] D. Wintgen and H. Friedrich, Phys. Rev. A 35, 1464 (1987).
[19] J. Goldberg, U. Smilansky, M. V. Berry, W. Schweizer, G. Wunner, and G. Zeller, Nonlinearity 4, 1 (1991).
[20] B. I. Shklovskii, B. Shapiro, B. R. Sears, P. Lambrianides, and H. B. Shore, Phys. Rev. B 47, 11487 (1993).
[21] P. Shukla, J. Phys.: Condens. Matter 17, 1653 (2005).
[22] F. J. Dyson, J. Math. Phys. 3, 1191 (1962).
[23] E. Follana, C. Davies, and A. Hart, PoS LAT 2006, 051 (2006).
[24] A. M. García-García and J. C. Osborn, Phys. Rev. D 75, 034503 (2007).
[25] T. G. Kovács, Phys. Rev. Lett. 104, 031601 (2010).
[26] F. Bruckmann, T. G. Kovács, and S. Schierenberg, Phys. Rev. D 84, 034505 (2011).
[27] M. V. Berry and M. Robnik, J. Phys. A 17, 2413 (1984).
[28] M. S. Hussein and M. P. Pato, Phys. Rev. Lett. 70, 1089 (1993).
[29] J. M. Nieminen, J. Phys. A: Math. Theor. 42, 035001 (2009).
[30] T. A. Brody, Lett. Nuovo Cimento 7, 482 (1973).
[31] M. V. Berry and P. Shukla, J. Phys. A 42, 485102 (2009).
[32] T. Guhr, Ann. Phys. (NY) 250, 145 (1996).
[33] H. Kunz and B. Shapiro, Phys. Rev. E 58, 400 (1998).
[34] V. K. B. Kota and S. Sumedha, Phys. Rev. E 60, 3405 (1999).
[35] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, 9th Dover ed. (Dover, New York, 1964).
[36] M. Robnik, J. Phys. A: Math. Gen. 20, L495 (1987).
[37] H. Hasegawa, H. J. Mikeska, and H. Frahm, Phys. Rev. A 38, 395 (1988).
[38] E. Caurier, B. Grammaticos, and A. Ramani, J. Phys. A: Math. Gen. 23, 4903 (1990).
[39] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products, 5th ed. (Academic, San Diego, 1994).
[40] K. Gottfried and T.-M. Yan, Quantum Mechanics: Fundamentals, 2nd ed. (Springer, New York, 2004).
[41] A. Pandey, Ann. Phys. (NY) 134, 110 (1981).
[42] J. B. French, V. K. B. Kota, A. Pandey, and S. Tomsovic, Ann. Phys. (NY) 181, 198 (1988).
[43] J. Walker, Fourier Analysis (Oxford University Press, Oxford, 1988).

