Correspondence

Iterative Water-Filling for Gaussian Vector Multiple-Access Channels

Abstract—This correspondence proposes an efficient numerical algorithm to compute the optimal input distribution that maximizes the sum capacity of a Gaussian multiple-access channel with vector inputs and a vector output. The numerical algorithm has an iterative water-filling interpretation. The algorithm converges from any starting point, and it reaches within 1/2 nats per user per output dimension from the sum capacity after just one iteration. The characterization of the sum capacity also allows an upper bound and a lower bound for the entire capacity region to be derived.

Index Terms—Channel capacity, convex optimization, Gaussian channels, multiuser channels, multiple-access channels, multiple-access communications, multiple-antenna systems, optimization methods, power control, water-filling.

I. INTRODUCTION

A communication situation where multiple uncoordinated transmitters send independent information to a common receiver is referred to as a multiple-access channel. Fig. 1 illustrates a two-user multiple-access channel, where X_1 and X_2 are uncoordinated transmitters encoding independent messages W_1 and W_2, respectively, and the receiver decodes both messages. The channel is characterized by a transition probability p(y|x_1, x_2). For each fixed input distribution p_1(x_1)p_2(x_2), the following pentagon rate region is achievable:

    R_1 ≤ I(X_1; Y | X_2)
    R_2 ≤ I(X_2; Y | X_1)
    R_1 + R_2 ≤ I(X_1, X_2; Y)                                    (1)

where the mutual information expressions are computed with respect to the joint distribution p(y|x_1, x_2)p_1(x_1)p_2(x_2). When the input distribution is not fixed, but constrained in some way, the capacity region is, after the convex hull operation, the union of all capacity pentagons whose corresponding input distributions satisfy the input constraint [3], [4]. Since the input signals in a multiple-access channel are independent, the input distribution must take a product form p_1(x_1)p_2(x_2). This product constraint is not a convex constraint, so the problem of finding the optimal input distribution for a multiple-access channel is in general nontrivial [5]. The aim of this correspondence is to provide a numerical solution to this input optimization problem for a particular type of multiple-access channel: the Gaussian vector multiple-access channel.

A Gaussian multiple-access channel refers to a multiple-access channel in which the law of the channel transition probability p(y|x_1, x_2) is Gaussian. When a Gaussian multiple-access channel is memoryless and when X_1 and X_2 are scalars, the input optimization problem has a simple solution. Let the power constraints on X_1 and X_2 be P_1 and P_2, respectively. Gaussian independent distributions with full powers P_1 and P_2 are then optimal.
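For this scalar Gaussian case, the pentagon (1) can be evaluated in closed form, since each mutual information constraint reduces to a familiar (1/2) log(1 + SNR) expression. The following is a minimal numerical sketch (ours, not part of the correspondence), assuming unit channel gains and noise variance N0, with rates in nats:

```python
import math

def pentagon(P1, P2, N0=1.0):
    """Constraints of the rate pentagon (1) for a scalar Gaussian two-user
    multiple-access channel with powers P1, P2 and noise variance N0."""
    R1_max = 0.5 * math.log(1 + P1 / N0)           # I(X1; Y | X2)
    R2_max = 0.5 * math.log(1 + P2 / N0)           # I(X2; Y | X1)
    Rsum_max = 0.5 * math.log(1 + (P1 + P2) / N0)  # I(X1, X2; Y)
    return R1_max, R2_max, Rsum_max

R1, R2, Rs = pentagon(P1=2.0, P2=2.0)
# The sum-rate constraint is strictly tighter than R1_max + R2_max,
# which is what truncates the rectangle into a pentagon.
assert Rs < R1 + R2
```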
The situation is more complicated if simultaneous diagonalization is not possible. This more general setting corresponds to a multiple-access situation where both the transmitters and the receiver are equipped with multiple antennas. In the spatial domain, the channel gain between a transmit antenna and a receive antenna can be arbitrary, so the channel matrix can have an arbitrary structure. In general, it is not possible to simultaneously decompose an arbitrary set of matrix channels into parallel independent scalar subchannels. Unlike ISI channels, where the time-invariance property leads to a Toeplitz structure for the channel matrix, a multiple-antenna channel does not follow spatial invariance. Consequently, the equivalent of a cyclic prefix does not exist in the spatial domain, and the transmitter optimization problem becomes a combination of choosing the optimal signaling directions for each user and allocating a correct amount of power in each signaling direction. Such a joint optimization strikes a compromise between maximizing each user's data rate and minimizing its interference to other users. Fortunately, it can be shown that the input optimization problem for Gaussian vector multiple-access channels is a convex optimization problem [10]. So the optimization problem is tractable in theory, and general-purpose convex programming routines, such as the interior-point method [11], are applicable. However, for a large-dimensional problem, the optimization may still be computationally intensive, because the optimization is performed in the space of positive semidefinite matrices, and the number of scalar variables grows quadratically with the number of input dimensions. In the literature, the input optimization problem for a multiple-antenna multiple-access fading channel is considered in [12], where asymptotic results in the limit of an infinite number of users and an infinite number of antennas have been reported. A similar problem exists for CDMA systems, where the matrix channel is determined by the spreading sequences. Recent results in this area have been obtained in [13]–[15].

The main contribution of this correspondence is a numerical algorithm that can be used to efficiently compute the sum-capacity-achieving input distribution for a Gaussian vector multiple-access channel. It is shown that the joint optimization of signaling power and signaling directions for a vector multiple-access channel can be performed by a generalization of the single-user input optimization. In a single-user vector channel, the optimal signaling directions are the eigenmodes of the channel matrix, and the optimal power allocation is the so-called water-filling allocation [6]. For a vector multiple-access channel, although each user has a different channel and experiences a different interference structure, it is possible to apply single-user water-filling iteratively to reach a compromise among the signaling strategies for different users. This iterative water-filling procedure always converges, and it converges to the sum capacity of a vector multiple-access channel.

The rest of this correspondence is organized as follows. In Section II, the input optimization problem for the Gaussian vector multiple-access channel is formulated in a convex programming framework. In Section III, an optimality condition for the sum rate maximization problem is presented. In Section IV, the iterative water-filling algorithm is derived, and its convergence property is studied. Section V provides capacity region bounds based on the sum rate points. Section VI contains concluding remarks.

After this correspondence was initially submitted for review, the authors learned that a similar alternate input optimization procedure was suggested by Médard in the context of single-antenna multipath fading channels [16]. The algorithm presented in this correspondence is based on the same principle. The treatment here includes a more complete proof and a novel convergence analysis of the algorithm.

II. PROBLEM FORMULATION

A memoryless K-user Gaussian vector multiple-access channel can be represented as follows (see Fig. 2):

    Y = Σ_{i=1}^{K} H_i X_i + Z                                    (2)

where X_i is the input vector signal, Y is the output vector signal, Z is the additive Gaussian noise vector with a covariance matrix denoted as S_z, and H_i is the time-invariant channel matrix. The channels are assumed to be known to both the transmitters and the receiver. Further, no feedback channel is available between the receiver and the transmitters; thus, transmitter cooperation (beyond time synchronization) is not possible. The input signals are assumed to be independent, with a joint distribution ∏_{i=1}^{K} p_i(x_i) that satisfies the power constraints tr(E[X_i X_i^T]) ≤ P_i. Let S_i be the covariance matrix of X_i, i.e., S_i = E[X_i X_i^T]. Then, the power constraint becomes tr(S_i) ≤ P_i.

The capacity region for a K-user multiple-access channel is the convex hull of the union of capacity pentagons defined in (1). For Gaussian vector multiple-access channels, the convex hull operation is not needed, and the capacity region can be characterized by maximizing Σ_{i=1}^{K} μ_i R_i, with μ_i ≥ 0. The input distribution that maximizes this weighted rate sum is known to be a Gaussian distribution. Without loss of generality, let μ_1 ≥ ··· ≥ μ_K.

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 1, JANUARY 2004
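The channel model (2), together with the log-determinant sum-rate objective used throughout, can be sketched numerically as follows (a Python illustration with small, hypothetical dimensions and random channel matrices; all names are ours, not the correspondence's):

```python
import numpy as np

rng = np.random.default_rng(0)
K, m = 2, 4                          # number of users and output dimensions
n = [3, 2]                           # input dimension of each user
H = [rng.standard_normal((m, n[i])) for i in range(K)]  # channel matrices H_i
Sz = np.eye(m)                       # noise covariance S_z
P = [2.0, 2.0]                       # power constraints P_i

# Transmit covariances S_i with tr(S_i) <= P_i: here, uniform power allocation.
S = [P[i] / n[i] * np.eye(n[i]) for i in range(K)]

def sum_rate(H, S, Sz):
    """(1/2) log|sum_i H_i S_i H_i^T + S_z| - (1/2) log|S_z|, in nats."""
    A = Sz + sum(Hi @ Si @ Hi.T for Hi, Si in zip(H, S))
    return 0.5 * (np.linalg.slogdet(A)[1] - np.linalg.slogdet(Sz)[1])

# One use of the channel (2): Y = sum_i H_i X_i + Z.
X = [rng.multivariate_normal(np.zeros(n[i]), S[i]) for i in range(K)]
Z = rng.multivariate_normal(np.zeros(m), Sz)
Y = sum(H[i] @ X[i] for i in range(K)) + Z
```

The uniform allocation above is merely feasible; the remainder of the correspondence is about choosing the S_i optimally.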
In particular, with μ_1 = ··· = μ_K = 1, the sum capacity is the solution of the optimization problem

    maximize    (1/2) log |Σ_{i=1}^{K} H_i S_i H_i^T + S_z| − (1/2) log |S_z|
    subject to  tr(S_i) ≤ P_i,   i = 1, ..., K                     (4)
                S_i ⪰ 0,         i = 1, ..., K.

III. SUM CAPACITY

Consider first the single-user version of the problem, i.e., (4) with K = 1. The solution to this problem involves two steps. First, since S_z is a positive definite matrix, it can be factored into the form S_z = QΛQ^T, where Q is an orthogonal matrix, QQ^T = I, and Λ is a diagonal matrix of eigenvalues {λ_1, ..., λ_m}. Define Ĥ = Λ^{−1/2} Q^T H. The objective can then be rewritten as (1/2) log |Ĥ S Ĥ^T + I|. Next, let Ĥ = FΣM^T be a singular-value decomposition of Ĥ, where F and M are orthogonal matrices, and Σ is a diagonal matrix of singular values {h_1, h_2, ..., h_r}, where r is the rank of Ĥ. Consider Ŝ = M^T S M as the new optimization variable. Since tr(S) = tr(Ŝ), the problem is then transformed into

    maximize    (1/2) log |Σ Ŝ Σ^T + I|
    subject to  tr(Ŝ) ≤ P                                          (7)
                Ŝ ⪰ 0.

The optimal Ŝ is a diagonal matrix Ŝ = diag(p_1, ..., p_r) whose entries are given by the water-filling allocation

    p_i + 1/h_i^2 = K_l,   if 1/h_i^2 < K_l
    p_i = 0,               if 1/h_i^2 ≥ K_l                        (8)

where the water level K_l is chosen so that

    Σ_{i=1}^{r} p_i = P.                                           (9)

Thus, the single-user solution has two features. First, the optimal signaling directions are the eigenmodes of the equivalent channel, which decompose the vector channel into independent scalar subchannels. Second, the power allocation among the subchannels must be a water-filling power allocation based on the noise-to-channel-gain ratio in each subchannel. Solving the single-user input optimization via water-filling is more efficient than using general-purpose convex programming algorithms, because water-filling takes advantage of the structure of the problem.
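The two-step reduction and the water-filling allocation (8)–(9) can be sketched in code as follows (our illustration, not the authors' implementation; the descending active-set search is one standard way of locating the water level K_l):

```python
import numpy as np

def waterfill(H, Sz, P):
    """Single-user water-filling: whiten the noise, diagonalize the channel
    by SVD, pour power P over the subchannels per (8)-(9), rotate back."""
    lam, Q = np.linalg.eigh(Sz)                 # Sz = Q diag(lam) Q^T
    Hw = np.diag(lam ** -0.5) @ Q.T @ H         # whitened channel H-hat
    _, sv, Mt = np.linalg.svd(Hw, full_matrices=False)
    g = 1.0 / sv ** 2                           # noise-to-gain ratios 1/h_i^2
    order = np.argsort(g)
    for r in range(len(sv), 0, -1):             # shrink active set until feasible
        Kl = (P + g[order[:r]].sum()) / r       # candidate water level K_l
        if Kl > g[order[:r]].max():
            break
    p = np.maximum(Kl - g, 0.0)                 # powers p_i of (8)
    M = Mt.T
    return M @ np.diag(p) @ M.T                 # S = M S-hat M^T

def rate(H, S, Sz):
    """(1/2) log |H S H^T + Sz| - (1/2) log |Sz|, in nats."""
    return 0.5 * (np.linalg.slogdet(H @ S @ H.T + Sz)[1]
                  - np.linalg.slogdet(Sz)[1])

rng = np.random.default_rng(0)
H, Sz, P = rng.standard_normal((4, 4)), np.eye(4), 1.0
S = waterfill(H, Sz, P)
uniform = P / 4 * np.eye(4)
assert np.isclose(np.trace(S), P)               # full power is used
assert rate(H, S, Sz) >= rate(H, uniform, Sz) - 1e-9   # beats uniform allocation
```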
The key observation is that the multiuser sum capacity problem is solved exactly when every user simultaneously satisfies this single-user water-filling condition.

Theorem 1: (S_1, ..., S_K) is an optimal solution of the sum capacity problem

    maximize    (1/2) log |Σ_{i=1}^{K} H_i S_i H_i^T + S_z| − (1/2) log |S_z|
    subject to  tr(S_i) ≤ P_i,   i = 1, ..., K                     (10)
                S_i ⪰ 0,         i = 1, ..., K

if and only if, for each i, S_i is the single-user water-filling covariance matrix for the channel H_i, with S_z + Σ_{j=1, j≠i}^{K} H_j S_j H_j^T regarded as the noise covariance.

Proof: The only if part is easy. Suppose that at the rate-sum optimum, there is an S_i that does not satisfy the single-user water-filling condition. Fix all other covariance matrices, and set S_i to be the water-filling covariance matrix with S_z + Σ_{j=1, j≠i}^{K} H_j S_j H_j^T as noise. With all other covariance matrices fixed, the single-user optimization problem for S_i differs from the sum rate optimization problem by a constant. Thus, setting S_i to be the water-filling covariance matrix strictly increases the sum rate objective. This contradicts the optimality of {S_i}. Thus, at the optimum, all S_i's must satisfy the single-user water-filling condition.
The if part also holds. The proof relies on standard convex analysis. The constraints of the optimization problem are such that Slater's condition is satisfied. So, the Karush–Kuhn–Tucker (KKT) condition of the optimization problem is sufficient and necessary for optimality. To derive the KKT conditions, form the Lagrangian

    L(S_i, λ_i, Ψ_i) = log |Σ_{i=1}^{K} H_i S_i H_i^T + S_z|
                       − Σ_{i=1}^{K} λ_i (tr(S_i) − P_i) + Σ_{i=1}^{K} tr(S_i Ψ_i).   (11)

The coefficient 1/2 and the constant log |S_z| are omitted for simplicity. Here, {λ_i} are the scalar dual variables associated with the power constraints, and {Ψ_i} are the matrix dual variables associated with the positive semidefiniteness constraints. The inner product in the space of semidefinite matrices is the trace of the matrix product. The KKT condition of the optimization problem consists of the condition ∂L/∂S_i = 0, the complementary slackness conditions, and the primal and dual constraints

    λ_i I = H_i^T (Σ_{j=1}^{K} H_j S_j H_j^T + S_z)^{−1} H_i + Ψ_i
    tr(S_i) = P_i
    tr(Ψ_i S_i) = 0
    Ψ_i ⪰ 0,  S_i ⪰ 0,  λ_i ≥ 0                                    (12)

for all i = 1, ..., K. Note that the gradient of log |X| is X^{−1}.

Now, the above KKT condition is also valid for the single-user water-filling problem when K is set to 1. In this case, it is easy to verify that the single-user solution (8), (9) satisfies the condition exactly. But, for each user i, the multiuser KKT condition and the single-user KKT condition differ only by the additional noise term Σ_{j=1, j≠i}^{K} H_j S_j H_j^T. So, if each S_i satisfies the single-user condition while regarding other users' signals as additional noise, then collectively, the set of {S_i} must also satisfy the multiuser KKT condition. By the sufficiency of the KKT condition, {S_i} must then be the optimal covariance for the multiuser problem. This proves the if part of the theorem.

IV. ITERATIVE WATER-FILLING

At the rate-sum optimum, each user's covariance matrix is a water-filling covariance against the combined noise and all other users' interference. This suggests that the set of rate-sum optimal covariance matrices can be found using an iterative procedure.

Algorithm 1: Iterative water-filling
    initialize S_i = 0, i = 1, ..., K.
    repeat
        for i = 1 to K
            S_z' = Σ_{j=1, j≠i}^{K} H_j S_j H_j^T + S_z;
            S_i = arg max_{S: tr(S) ≤ P_i, S ⪰ 0} (1/2) log |H_i S H_i^T + S_z'|;
        end
    until the sum rate converges.

Theorem 2: In the iterative water-filling algorithm, the sum rate converges to the sum capacity, and (S_1, ..., S_K) converges to an optimal set of input covariance matrices for the Gaussian vector multiple-access channel.

Proof: At each step, the iterative water-filling algorithm finds the single-user water-filling covariance matrix for each user while regarding all other users' signals as additional noise. Since the single-user rate objective differs from the multiuser rate-sum objective by only a constant, the rate-sum objective is nondecreasing with each water-filling step. The rate-sum objective is bounded above, so the sum rate converges to a limit.

The covariance matrices S_1, ..., S_K also converge to a limit. Because the single-user water-filling matrix is unique, each water-filling step in the iterative algorithm must either yield a strict increase of the sum rate or keep the covariance matrices the same. At the limit, all S_i's are simultaneously the single-user water-filling covariance matrices of user i with all other users' signals regarded as additional noise. Then, by Theorem 1, this set of (S_1, ..., S_K) must achieve the sum capacity of the Gaussian vector multiple-access channel.

Note that the proof does not depend on the initial starting point. Thus, the algorithm converges to the sum capacity from any starting values of (S_1, ..., S_K). However, although the sum capacity is unique, the optimal covariance matrices themselves may not be. It is possible for the iterative algorithm to converge to two different sets of covariance matrices both giving the same optimal sum rate. The following example illustrates this point. Let H_1 = H_2 = S_z = I_{2×2}, and P_1 = P_2 = 2. Then, S_1 = S_2 = I_{2×2}, and

    S_1' = [ 2  0 ]      S_2' = [ 0  0 ]
           [ 0  0 ],            [ 0  2 ]

both achieve the same sum capacity.

Fig. 3 gives a graphical interpretation of the algorithm. The capacity region of a two-user vector multiple-access channel is shown in Fig. 3(a). The sum rate R_1 + R_2 reaches the maximum on the line segment between C and D. Initially, the covariance matrices for the two users, S_1^(0) and S_2^(0), are zero matrices.

1) The first iteration is shown in Fig. 3(b). After a single-user water-filling for S_1^(1), the rate pair (R_1, R_2) is at point F. Then, treating S_1^(1) as noise, a single-user water-filling for S_2^(1) moves the rate pair to point E.

2) The second iteration is shown in Fig. 3(c). First, note that with fixed covariance matrices S_1^(1) and S_2^(1), the capacity region is the pentagon abEFO. So, by changing the decoding order of users 1 and 2, the rate pair can be moved to point b without affecting the rate sum. Once at point b, water-filling for S_1 while treating S_2^(1) as noise keeps I(X_2; Y | X_1) fixed, thus moving the rate pair to point c.

3) The capacity pentagon with (S_1^(2), S_2^(1)) is now represented by acdeO. So, the decoding order can again be interchanged to get to the point d. Performing another single-user water-filling treating S_1^(2) as additional noise gives S_2^(2) and the corresponding rate-pair point f in Fig. 3(d). The process continues until it converges to points C and D.

Note that in every step, each user negotiates for itself the best signaling direction as well as the optimal power allocation while regarding the interference generated by all other users as noise. The iterative water-filling algorithm is more efficient than general-purpose convex programming routines, because the algorithm decomposes the multiuser problem into a sequence of single-user problems, each of which is much easier to solve. Further, in each step, the single-user water-filling algorithm takes advantage of the problem structure by performing an eigenmode decomposition. In fact, the convergence is very fast.

B. Convergence Behavior

The iterative procedure arrives at a corner point of some rate region pentagon after the first iteration. The following theorem shows that this
Fig. 3. First two iterations of the iterative water-filling algorithm.
corner point is only 1/2 nats per user per output dimension away from the sum capacity.

Theorem 3: After one iteration of the iterative water-filling algorithm, {S_i} achieves a total data rate Σ_{i=1}^{K} R_i that is at most (K − 1)m/2 nats away from the sum capacity.

Proof: The idea is to form the Lagrangian dual of the original optimization problem, and to use the fact that the difference between the primal and dual objectives, the so-called duality gap, is a bound on the difference between the primal objective and the optimum.

The first step in deriving the dual problem is to reformulate the optimization problem (10) in the following equivalent form:

    minimize    − log |T|
    subject to  T ⪯ Σ_{i=1}^{K} H_i S_i H_i^T + S_z                (13)
                tr(S_i) ≤ P_i,   i = 1, ..., K
                S_i ⪰ 0,         i = 1, ..., K

where again the coefficient 1/2 and the constant log |S_z| are dropped. The Lagrangian of the new optimization problem is

    L({S_i}, T, Γ, {λ_i}, {Ψ_i}) = − log |T| + tr(ΓT)
        − tr(Γ(Σ_{i=1}^{K} H_i S_i H_i^T + S_z))
        + Σ_{i=1}^{K} λ_i (tr(S_i) − P_i) − Σ_{i=1}^{K} tr(S_i Ψ_i)   (14)

where the fact tr(AB) = tr(BA) is used. The objective of the dual program is

    g(Γ, {λ_i}, {Ψ_i}) = inf_{{S_i}, T} L({S_i}, T, Γ, {λ_i}, {Ψ_i}).   (15)

At the optimum, ∂L/∂S_i = 0. Thus,

    λ_i I = H_i^T Γ H_i + Ψ_i,   i = 1, 2, ..., K.                 (16)

Further, at the optimum, ∂L/∂T = 0. Thus,

    ∂/∂T (− log |T| + tr(ΓT)) = 0.                                 (17)

This implies that

    T^{−1} = Γ.                                                    (18)

Therefore,

    g(Γ, {λ_i}, {Ψ_i}) = log |Γ| + m − tr(ΓS_z) − Σ_{i=1}^{K} λ_i P_i
where m is the number of output dimensions. The dual problem of (13) is then

    maximize    log |Γ| + m − tr(ΓS_z) − Σ_{i=1}^{K} λ_i P_i
    subject to  H_i^T Γ H_i ⪯ λ_i I,   i = 1, ..., K               (19)
                Γ ⪰ 0.

The duality gap, denoted as γ, is the difference between the objective of the primal problem (13) and the dual problem (19). Now, putting everything together, with Γ = (Σ_{i=1}^{K} H_i S_i H_i^T + S_z)^{−1}, the duality gap evaluates to

    γ = tr((Σ_{i=1}^{K} H_i S_i H_i^T + S_z)^{−1} S_z) + Σ_{i=1}^{K} λ_i P_i − m.   (20)
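To make the preceding results concrete, the following sketch (ours, in Python, with hypothetical random channels; not the authors' implementation) implements Algorithm 1 and checks the behavior asserted by Theorems 2 and 3: the sum rate is nondecreasing across iterations, and the first full iteration already lands within (K − 1)m/2 nats of the converged value:

```python
import numpy as np

def waterfill(H, Sz, P):
    """Single-user water-filling covariance for channel H, noise covariance
    Sz, power budget P (the inner step of Algorithm 1)."""
    lam, Q = np.linalg.eigh(Sz)
    Hw = np.diag(lam ** -0.5) @ Q.T @ H          # whiten the noise
    _, sv, Mt = np.linalg.svd(Hw, full_matrices=False)
    g = 1.0 / sv ** 2                            # noise-to-gain ratios 1/h_i^2
    order = np.argsort(g)
    for r in range(len(sv), 0, -1):
        Kl = (P + g[order[:r]].sum()) / r        # candidate water level
        if Kl > g[order[:r]].max():
            break
    p = np.maximum(Kl - g, 0.0)
    M = Mt.T
    return M @ np.diag(p) @ M.T

def sum_rate(H, S, Sz):
    A = Sz + sum(Hi @ Si @ Hi.T for Hi, Si in zip(H, S))
    return 0.5 * (np.linalg.slogdet(A)[1] - np.linalg.slogdet(Sz)[1])

def iterative_waterfilling(H, P, Sz, iters=50):
    """Algorithm 1: cyclically water-fill each user against the noise plus
    the current interference from all other users."""
    K = len(H)
    S = [np.zeros((Hi.shape[1], Hi.shape[1])) for Hi in H]
    rates = []
    for _ in range(iters):
        for i in range(K):
            Szp = Sz + sum(H[j] @ S[j] @ H[j].T for j in range(K) if j != i)
            S[i] = waterfill(H[i], Szp, P[i])
        rates.append(sum_rate(H, S, Sz))
    return S, rates

rng = np.random.default_rng(1)
m, K = 4, 3
H = [rng.standard_normal((m, 3)) for _ in range(K)]
S, rates = iterative_waterfilling(H, [1.0] * K, np.eye(m))
# Theorem 2: the sum rate is nondecreasing and converges.
assert all(b >= a - 1e-9 for a, b in zip(rates, rates[1:]))
# Theorem 3: one iteration is already within (K - 1) * m / 2 nats.
assert rates[-1] - rates[0] <= (K - 1) * m / 2
```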
according to an i.i.d. Gaussian random variable with zero mean and unit variance.

In Fig. 5, the points B and E can be found after one iteration of water-filling. Let their respective input covariance matrices be S_B = (S_{B,1}, S_{B,2}) and S_E = (S_{E,1}, S_{E,2}). Also, the sum capacity points C and D are found using iterative water-filling. Denote the sum-capacity-achieving covariance matrix as S_{CD} = (S_{CD,1}, S_{CD,2}). Note that the line segment between C and D defines a portion of the capacity boundary. If the optimal sum-capacity covariance matrices happen to be orthogonal, points C and D collapse to the same point.

A lower bound for the region between B and C (or D and E) can be found based on the linear combination of covariance matrices S_B and S_{CD} (or S_E and S_{CD}, respectively). Consider the data rates associated with the covariance matrices αS_B + (1 − α)S_{CD} with user 1 decoded first (or βS_E + (1 − β)S_{CD} with user 2 decoded first), where α (or β) ranges from 0 to 1. These rates are achievable, so they are lower bounds. Because the objective is a concave function of the covariance matrices, this lower bound is better than the time-sharing of data rates associated with B and C (or D and E). Since the corner points after one iteration (i.e., B and E) are at most (K − 1)m/2 nats away from the sum capacity, the lower bound is a close approximation of the capacity region. A typical example is shown in Fig. 5. Extensive numerical simulations show that the lower bound is fairly tight. An upper bound is also plotted by extending the line segments AB, CD, and EF. This is an upper bound because the capacity region is convex.

VI. CONCLUSION

This correspondence addresses the problem of finding the optimal transmitter covariance matrices that achieve the sum capacity in a Gaussian vector multiple-access channel. The computation of the sum capacity is formulated in a convex optimization framework. A multiuser water-filling condition for achieving the sum capacity is found. It is shown that the sum-rate maximization problem can be solved efficiently using an iterative water-filling algorithm, where each step of the iteration is equivalent to a local maximization of one user's data rate with multiuser interference treated as noise. The iterative water-filling algorithm is shown to converge to the sum capacity from any starting point. The convergence is fast. In particular, it reaches within 1/2 nats per user per output dimension from the sum capacity after just a single iteration. As mentioned before, the vector channel model discussed in this correspondence includes ISI channels and fading channels as special cases. Thus, the iterative water-filling algorithm can also be used to efficiently compute the power allocation across the frequency spectrum for an ISI channel or over time for a fading channel.

Finally, although the iterative water-filling algorithm solves the sum capacity problem efficiently, it does not directly apply to other points in the capacity region. The computation of other capacity points is also a convex programming problem. However, how to best exploit the problem structure in these cases is still not clear.

REFERENCES

[1] R. Ahlswede, "Multi-way communication channels," in Proc. 2nd Int. Symp. Information Theory, 1971, pp. 103–105.
[2] H. Liao, "Multiple access channels," Ph.D. dissertation, Univ. Hawaii, 1972.
[3] R. G. Gallager, "Energy limited channels: Coding, multiaccess and spread spectrum," in Proc. Conf. Information Science and Systems (CISS), Mar. 1988, p. 372.
[4] S. Verdú, "On channel capacity per unit cost," IEEE Trans. Inform. Theory, vol. 36, pp. 1019–1030, Sept. 1990.
[5] R. G. Gallager, "A perspective on multiaccess channels," IEEE Trans. Inform. Theory, vol. IT-31, pp. 124–142, Mar. 1985.
[6] T. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[7] R. S. Cheng and S. Verdú, "Gaussian multiaccess channels with ISI: Capacity region and multiuser water-filling," IEEE Trans. Inform. Theory, vol. 39, pp. 773–785, May 1993.
[8] R. Knopp and P. A. Humblet, "Information capacity and power control in single-cell multi-user communications," in Proc. IEEE Int. Conf. Communications (ICC), 1995, pp. 331–335.
[9] D. N. C. Tse and S. Hanly, "Multi-access fading channels: Part I: Polymatroid structure, optimal resource allocation and throughput capacities," IEEE Trans. Inform. Theory, vol. 44, pp. 2796–2815, Nov. 1998.
[10] W. Yu, "Competition and cooperation in multi-user communication environments," Ph.D. dissertation, Stanford Univ., Stanford, CA, 2002.
[11] L. Vandenberghe, S. Boyd, and S.-P. Wu, "Determinant maximization with linear matrix inequality constraints," SIAM J. Matrix Anal. Applic., vol. 19, pp. 499–533, 1998.
[12] P. Viswanath, D. N. C. Tse, and V. Anantharam, "Asymptotically optimal water-filling in vector multiple-access channels," IEEE Trans. Inform. Theory, vol. 47, pp. 241–267, Jan. 2001.
[13] P. Viswanath and V. Anantharam, "Optimal sequences and sum capacity of synchronous CDMA systems," IEEE Trans. Inform. Theory, vol. 45, pp. 1984–1991, Sept. 1999.
[14] S. Ulukus and R. D. Yates, "Iterative signature adaptation for capacity maximization of CDMA systems," in Proc. Allerton Conf. Communications, Control and Computing, 1998.
[15] C. Rose, S. Ulukus, and R. D. Yates, "Wireless systems and interference avoidance," IEEE Trans. Wireless Commun., vol. 1, pp. 415–428, July 2002.
[16] M. Médard, "The effect upon channel capacity in wireless communications of perfect and imperfect knowledge of the channel," IEEE Trans. Inform. Theory, vol. 46, pp. 935–946, May 2000.
[17] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1990.
[18] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, available at http://www.stanford.edu/~boyd/cvxbook.html, 2003.
[19] L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM Rev., vol. 38, no. 1, pp. 49–95, Mar. 1996.

Maximizing the Spectral Efficiency of Coded CDMA Under Successive Decoding

Giuseppe Caire, Senior Member, IEEE, Souad Guemghar, Student Member, IEEE, Aline Roumy, Member, IEEE, and Sergio Verdú, Fellow, IEEE

Abstract—We investigate the spectral efficiency achievable by random synchronous code-division multiple access (CDMA) with quaternary phase-shift keying (QPSK) modulation and binary error-control codes, in the large-system limit where the number of users, the spreading factor, and the code block length go to infinity. For given codes, we maximize spectral efficiency assuming a minimum mean-square error (MMSE) successive stripping decoder for the cases of equal-rate and equal-power users. In both cases, the maximization of spectral efficiency can be formulated as a linear program and admits a simple closed-form solution that can be readily interpreted in terms of power and rate control. We provide examples of the proposed optimization methods based on off-the-shelf low-density parity-check (LDPC) codes, and we investigate by simulation the performance of practical systems with finite code block length.

Index Terms—Channel capacity, code-division multiple access (CDMA), low-density parity-check (LDPC) codes, multiuser detection, quaternary phase-shift keying (QPSK) modulation, successive decoding.

I. INTRODUCTION

All points in the capacity region of the scalar Gaussian multiple-access channel are achievable by successive single-user encoding,

Manuscript received September 11, 2002; revised August 22, 2003. The material in this correspondence was presented at the 39th Allerton Conference, Monticello, IL, October 2001.
G. Caire and S. Guemghar are with the Eurecom Institute, 06904 Sophia-Antipolis, France (e-mail: Giuseppe.Caire@eurecom.fr; Souad.Guemghar@eurecom.fr).
A. Roumy is with IRISA-INRIA, 35042 Rennes, France (e-mail: aline.roumy@irisa.fr).
S. Verdú is with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544 USA (e-mail: verdu@princeton.edu).
Communicated by V. V. Veeravalli, Associate Editor for Detection and Estimation.
Digital Object Identifier 10.1109/TIT.2003.821970