
Globecom 2012 - Wireless Communications Symposium

Multi-Level Compress and Forward Coding for Half-Duplex Relays

Jaweria Amjad, Momin Uppal, and Saad Qaisar
Dept. of Electrical Engineering, NUST School of Electrical Engineering & Computer Sciences, Pakistan
Dept. of Electrical Engineering, LUMS School of Science and Engineering, Pakistan
Email: {jaweria.amjad,saad.qaiser}@seecs.edu.pk, momin.uppal@lums.edu.pk

Abstract—This paper presents a multi-level compress and forward coding scheme for a three-node relay network in which all transmissions are constrained to be from an M-ary PAM constellation. The proposed framework employs a uniform scalar quantizer followed by Slepian-Wolf coding at the relay. We first obtain a performance benchmark for the proposed scheme by deriving the corresponding information-theoretic achievable rate. A practical coding scheme involving multi-level codes is then discussed. At the source node, we use multi-level low-density parity-check codes for error protection. At the relay node, we propose a multi-level distributed joint source-channel coding scheme that uses irregular repeat-accumulate codes, the rates of which are carefully chosen using the chain rule of entropy. For a block length of 2 × 10^5 symbols, the proposed scheme operates within 0.56 and 0.63 dB of the theoretical limits at transmission rates of 1.0 and 1.5 bits/sample, respectively.

I. INTRODUCTION

Coding schemes for the three-node relay network can be broadly classified into the Decode and Forward (DF), Amplify and Forward (AF), and Compress and Forward (CF) categories [1]. In DF, the relay decodes the source message before re-encoding and forwarding it to the destination. Its performance is dictated by the quality of the source-to-relay channel; a decoding failure at the relay results in poor overall performance. On the other hand, in AF and CF, the relay does not attempt to decode. In AF, the signal received at the relay is simply amplified before being transmitted to the destination, whereas in CF, the relay compresses the signal it receives by exploiting its correlation with the signal received at the destination. The compressed signal is then transmitted to the destination. Unlike DF, CF always outperforms direct transmission regardless of the source-to-relay channel quality, and has been shown to have optimal asymptotic performance for cooperative ad-hoc networks [2].
There are only a few existing works on practical CF coding schemes in the literature. A CF coding scheme over a half-duplex Gaussian relay channel was first proposed in [3], [4]. A fixed-rate CF relaying scheme using low-density parity-check (LDPC) and irregular repeat-accumulate (IRA) codes with binary modulation for the half-duplex Gaussian relay channel was presented in [5]. It was shown that for practical purposes, a binary quantizer at the relay was sufficient to obtain near-optimal performance. The scheme was later extended to fading relay channels under a rateless coded setting in [6]. A CF coding scheme that implemented Slepian-Wolf (SW) coding [7] at the relay using polar codes was presented in [8]. A CF

978-1-4673-0921-9/12/$31.00 ©2012 IEEE

strategy named Quantize Map and Forward was proposed in [9], in which the quantized indices were mapped onto random Gaussian codewords. A practical implementation of this scheme using LDPC codes was presented in [10]. A variation of [9] was proposed in [11], in which the authors used vector quantizers for compression instead of scalar quantization. To the best of our knowledge, none of the existing works discuss a multi-level scheme for relays.
In this paper, we propose a multi-level CF (ML-CF) coding scheme for a half-duplex Gaussian relay channel where all transmissions (from both the source and the relay) are constrained to be from an M-ary PAM constellation. The scheme utilizes uniform scalar quantization (USQ) followed by SW coding for compression of the quantization indices at the relay. We first present the information-theoretic achievable rates under the M-ary constellation constraint, which serve as a performance benchmark for our subsequent code designs. With the help of numerical results, we demonstrate that a quantizer with M levels suffices for an M-ary PAM constellation. Since the quantization indices need to be compressed, as well as transmitted over a noisy relay-to-destination link, we propose a multi-level distributed joint source-channel coding (ML-DJSCC) strategy, implemented with the help of IRA codes, to provide joint compression and error protection for the quantization indices. The rates of the individual IRA codes are carefully chosen using the chain rule of entropy. For transmissions from the source, we employ multi-level LDPC codes to provide error protection. The degree distributions for the IRA and LDPC codes are optimized using the EXIT chart strategy [12] and the Gaussian assumption [13]. Simulations using optimized codes with a block length of 2 × 10^5 symbols show a performance gap of only 0.56 and 0.63 dB from the theoretical limits at transmission rates of 1.0 and 1.5 bits/sample (b/s), respectively.
II. SYSTEM MODEL

We consider a three-node relay network consisting of a source, a relay, and a destination node. Let d_sd, d_sr, and d_rd be the source-destination, source-relay, and relay-destination link distances, respectively. Throughout the paper, we assume that the distance d_sd is fixed while the other two are variable. The exact value at which d_sd is fixed is not important, since the same channel coefficients can be obtained by scaling the distances appropriately. However, for expositional clarity, we assume that d_sd = 1 meter (m). The corresponding (real) channels suffer path loss, with the channel gains given as c_sd = 1, c_sr = (d_sd/d_sr)^{3/2}, and c_rd = (d_sd/d_rd)^{3/2}. We assume the presence of global channel state information, i.e., each node is assumed to be aware of all three channel coefficients. All channels are assumed to have additive white Gaussian noise (AWGN), the variance of which we assume, without loss of generality, to be unity. The transmissions from the source as well as the relay are assumed to be modulated onto an M-ary PAM constellation.¹ The source and the relay are assumed to have average power constraints of P_s and P_r, respectively. Owing to the half-duplex nature of the relay node, the total transmission period of N symbols is divided into the relay receive period (denoted T1) of length αN symbols and the relay transmit period (denoted T2) of length ᾱN symbols, where α ∈ [0, 1] is the half-duplex time-sharing constant and ᾱ = 1 − α. Throughout the rest of the paper, we denote sequences with boldface letters and the associated random variables with italic letters. All logarithms used in the paper are to base 2.

III. CF RELAYING AND PERFORMANCE BOUNDS

The source partitions its message into m = log M streams and encodes each stream with a separate length-N LDPC code. The individual rates of these LDPC codes are denoted R_1, ..., R_m, with the overall transmission rate in b/s given as R = Σ_{i=1}^{m} R_i. The LDPC codewords are then modulated onto an M-ary PAM constellation to obtain N symbols. The first αN symbols, denoted X_s1, are transmitted during T1 subject to a power constraint E[X_s1²] ≤ P_s1, where X_s1 is the random variable associated with the independent and identically distributed (i.i.d.) sequence X_s1. The remaining ᾱN symbols are transmitted during T2 and satisfy the constraint E[X_s2²] ≤ P_s2. Due to the average power constraint at the source, we have αP_s1 + ᾱP_s2 ≤ P_s. The length-αN signal sequences received at the relay and destination during T1 are given as

Y_r = c_sr X_s1 + Z_r and Y_d1 = c_sd X_s1 + Z_d1,

respectively, where Z_r and Z_d1 are i.i.d. zero-mean, unit-variance Gaussian noise sequences. The relay quantizes Y_r using an L-level USQ to obtain a sequence W of quantization indices. Let k_0, ..., k_L be the quantization region boundaries. If q is the quantization step size, we have k_0 = −∞, k_i = (i − L/2)q for i = 1, ..., L−1, and k_L = +∞. The quantizer output W = w ∈ {0, ..., L−1} if the received signal Y_r ∈ {x : x ∈ R, k_w ≤ x < k_{w+1}}. The quantization indices are SW coded with Y_d1 as the decoder side-information and provided error protection to form a length-ᾱN codeword sequence X_r drawn from an M-ary PAM constellation subject to a power constraint E[X_r²] ≤ P_r/ᾱ. Note that since the relay does not transmit anything during T1, normalizing the power constraint by ᾱ makes sure that the average power consumption at the relay is P_r. The codeword X_r is then transmitted to the destination during T2, at the same time as X_s2 is transmitted. The destination receives the superposition of both signals:

Y_d2 = c_sd X_s2 + c_rd X_r + Z_d2,

where Z_d2 once again is an i.i.d. zero-mean, unit-variance Gaussian noise sequence.

The destination first attempts to recover the quantization indices W by treating the transmission X_s2 from the source as interference. It can do so if [5], [7]

α H(W | Y_d1) ≤ ᾱ I(Y_d2; X_r).    (1)

The information term on the right-hand side of (1) is the constrained capacity of an AWGN channel with an M-ary input and an M-ary interference, and can be computed numerically as (assuming the noise to be of unit variance)

C(S, I) = m + (1/M) Σ_{i=1}^{M} ∫ φ_i(y) log [ φ_i(y) / Σ_{j=1}^{M} φ_j(y) ] dy    (2)

with

φ_i(y) = (1/M) Σ_{k=1}^{M} f_g(y − x_i √S − x_k √I).

Here x_i represents the i-th point of the unit-energy M-ary PAM constellation, S and I are the received source and interferer powers, respectively, and f_g(z) is the zero-mean, unit-variance Gaussian probability density function evaluated at z. When I = 0, (2) reduces to the constrained capacity of an AWGN channel with an M-ary input.

After recovering W (and consequently X_r), the destination cancels the interference caused by X_r and attempts to recover the source message jointly from Y_d1, Y_d2, and W. The destination is capable of recovering the source message if the transmission rate satisfies

R ≤ α I(W, Y_d1; X_s1) + ᾱ I(X_s2; Y_d2 | X_r).    (3)

The information terms in (3) can be evaluated numerically using (2) and the conditional probability density function f(y_r | y_d1); we leave out the details because of space limitations. It should be noted that the achievable rate expression in (3) corresponds to a particular power allocation P_s1, P_s2, the quantization parameters L and q, and the half-duplexing parameter α that satisfy the constraint (1) and the average power constraint αP_s1 + ᾱP_s2 ≤ P_s. Thus one needs to search over these parameters to maximize the achievable rate under the given constraints.

¹Since an M-ary PAM represents a quadrature component of an M × M QAM, the results presented in this paper can be easily extended to the case of QAM as well.
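The constrained capacity in (2) is straightforward to evaluate numerically. The sketch below is our own illustration (not the authors' code): it computes C(S, I) for a unit-energy M-ary PAM input with an independent M-ary PAM interferer over a unit-variance AWGN channel, using simple grid integration. The grid limits and spacing are assumed values.

```python
import numpy as np

def pam_points(M):
    """Unit-energy M-ary PAM constellation points."""
    x = 2.0 * np.arange(M) - (M - 1)          # ..., -3, -1, 1, 3, ...
    return x / np.sqrt(np.mean(x ** 2))

def constrained_capacity(M, S, I=0.0):
    """Evaluate (2): C(S, I) in bits for an M-PAM input at received power S
    and an M-PAM interferer at power I, over unit-variance AWGN."""
    y = np.linspace(-15.0, 15.0, 6001)        # integration grid (assumed)
    x = pam_points(M)
    # phi_i(y) = (1/M) sum_k f_g(y - x_i*sqrt(S) - x_k*sqrt(I))
    diff = (y[None, None, :]
            - np.sqrt(S) * x[:, None, None]
            - np.sqrt(I) * x[None, :, None])
    phi = np.exp(-0.5 * diff ** 2).mean(axis=1) / np.sqrt(2.0 * np.pi)
    total = phi.sum(axis=0)
    ratio = np.maximum(phi, 1e-300) / np.maximum(total, 1e-300)
    # Rectangle-rule integration of phi_i(y) * log2(phi_i / sum_j phi_j)
    integral = (phi * np.log2(ratio)).sum() * (y[1] - y[0])
    return np.log2(M) + integral / M
```

With I = 0 this reduces to the M-ary AWGN constrained capacity; C(S, I) grows from 0 at S = 0 toward log2 M as S increases, and an interferer (I > 0) lowers it.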
A. Numerical Results and Observations

In Fig. 1, we present the achievable rates of the CF strategy versus P_s, where all transmissions from the source as well as the relay are modulated onto an M = 4-ary PAM constellation. In generating the results, we assume that L = M = 4, and numerically search over α, q, and the power allocation between P_s1 and P_s2 that yields the maximum achievable rate in (3) while satisfying the constraint (1) (in addition to the constraint αP_s1 + ᾱP_s2 ≤ P_s). Some observations that can be made from these numerical results are:


Fig. 1. CF achievable rates versus the source power P_s for M = 4, and their comparison with DF and direct transmission rates. The other parameters are set to P_r = −6 dB, d_rd = 0.2 m, and d_sr = 0.95 m.

Fig. 2. A comparison of CF achievable rates for several L. The parameters are set to M = 8, P_r = −3 dB, d_rd = 0.15 m, and d_sr = 0.95 m.

• At an overall transmission rate of 1.0 b/s, the CF strategy outperforms DF by a margin of 1.15 dB, whereas the gain over direct transmission is approximately 1.2 dB. Note that in order to have a fair comparison, we have assumed that for the direct transmission case, the source transmits with a power equal to P_s + P_r.

• It was shown in [5] that a binary quantizer (L = 2, q = ∞) sufficed for the case when the transmissions from the source and relay were modulated onto a binary constellation (M = 2), i.e., no significant gains were observed with L > 2. The results in Fig. 1, however, indicate that a binary quantizer does not suffice for higher-order constellations. For example, for M = 4, we observe that in order to achieve a transmission rate of 1.5 b/s, employing a binary quantizer at the relay requires 0.89 dB more transmission power than the case with L = 4. At the same time, our numerical results (not shown in the figure) indicate that going beyond L = 4 does not yield any noticeable gains.

• A similar observation is made for the case of M = 8. As shown in Fig. 2, achieving a transmission rate of 2.0 b/s with a binary quantizer requires 1.2 dB more power at the source than the case with L = 4; the L = 4 quantizer in turn requires 0.36 dB more power than the case with L = 8. We have also found numerically that going beyond L = 8 in this case gives no further noticeable gains.

Based on these observations, as well as those from [5], we can reasonably conclude that an M-level quantizer should suffice for the case when the transmissions from the source and relay are modulated onto an M-ary PAM constellation. An intuitive explanation behind this notion is the fact that the number of bits per quantization index for an M-level quantizer is log M, the same as the maximum channel capacity (in b/s) of the relay-to-destination link over which the quantization indices need to be transmitted. Thus, it should be sufficient to employ an M-level quantizer when the relay employs an M-ary PAM constellation.

IV. PRACTICAL ML-CF RELAYING SYSTEM

Whereas past research focused on minimizing Euclidean distance and maximizing asymptotic coding gains, recent work in the domain of coded modulation has shown that schemes like multi-level coding (MLC) [14] and bit-interleaved coded modulation (BICM) [15] can achieve capacity while providing both power and bandwidth efficiency. In particular, multi-stage decoding (MSD), where each code/level is decoded individually instead of jointly, has been shown to approach channel capacity with limited complexity [16]. Thus, we implement the practical ML-CF relaying scheme with the help of multi-level LDPC codes to encode the message at the source, and multi-level IRA codes to provide joint compression and error protection at the relay. In the following subsections, we describe our ML-CF coding scheme in detail.

Fig. 3. Encoding at the source node using m LDPC codes. The P2S blocks indicate parallel-to-serial conversion.

A. Message Encoding

A block diagram of the multi-level message encoder at the source node is shown in Fig. 3. The message to be transmitted to the destination is partitioned into m bit-streams. Each bit-stream is encoded with a length-N LDPC code of rate R_i, i = 1, ..., m, so that the length of the i-th message bit-stream is N R_i. The sum of the individual code rates gives the overall transmission rate in b/s, i.e., Σ_{i=1}^{m} R_i = R. The resulting codewords are serially fed m bits at a time into a unit-energy

M-PAM modulator, i.e., the k-th bit of all the codewords forms the k-th symbol of the PAM sequence, k = 1, ..., N. The first αN symbols of this PAM sequence are scaled by √P_s1 to form the sequence X_s1, which satisfies the average power constraint of P_s1 and is transmitted to the relay and the destination during T1. The remaining ᾱN symbols are scaled by √P_s2 to form the sequence X_s2, which satisfies an average power constraint of P_s2 and is transmitted during T2.

Fig. 4. Multilevel DJSCC encoding at the relay using l IRA codes.

Fig. 5. DJSCC decoder for the i-th quantization bit-plane.
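The bit-to-symbol mapping just described can be sketched as follows. This is our own toy illustration with assumed parameters; the paper does not specify the exact bit labeling, so natural binary labeling is assumed here.

```python
import numpy as np

m = 2                                   # bits per symbol, so M = 4 (assumed)
M = 2 ** m
n_sym = 12                              # toy number of symbols (assumed)

rng = np.random.default_rng(1)
codewords = rng.integers(0, 2, size=(m, n_sym))   # row i = bits of LDPC code i

# Combine the m bits at position k into one symbol index (natural labeling,
# row 0 taken as the most-significant bit)
idx = np.zeros(n_sym, dtype=int)
for bits in codewords:
    idx = (idx << 1) | bits

levels = 2.0 * np.arange(M) - (M - 1)   # -3, -1, 1, 3 for M = 4
levels /= np.sqrt(np.mean(levels ** 2)) # normalize to unit energy
Ps1 = 2.0                               # assumed power for the T1 portion
xs1 = np.sqrt(Ps1) * levels[idx]        # scaled M-PAM transmit sequence
```

The same mapping with scaling √P_s2 would produce the T2 portion X_s2.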
B. Multilevel Distributed Joint Source Channel Coding

As mentioned earlier, the relay quantizes the received sequence Y_r using an L-level quantizer, the quantization step size of which is chosen to maximize the overall transmission rate in (3). The quantizer outputs a sequence W consisting of αN quantization indices, each of length l = log L bits. The sequence W now needs to be compressed using SW coding with Y_d1 as the decoder side-information. At the same time, channel coding is also required to protect its transmission against noise on the relay-to-destination link. Instead of providing separate SW and channel coding, we resort to distributed joint source-channel coding (DJSCC) [5], in which SW coding and error protection are implemented jointly. The challenge, however, is that for L > 2 each quantization index is composed of l > 1 bits, as opposed to the binary case in [5] where the quantization indices were one bit each. Taking this into account, we propose to use multiple binary IRA codes to implement multi-level DJSCC, the details of which are given below.
Encoding: We split the quantization index sequence W into l bit-plane sequences W_1, ..., W_l, each of length αN; W_1 comprises the least-significant bits of the original quantization sequence, whereas W_l comprises the most-significant bits. One possibility could have been to encode each of the l quantization bit-planes with m IRA codes, the parity bits of which are then mapped to an M-PAM constellation (similar to Fig. 3). However, this approach requires the use of l·m IRA codes, which becomes prohibitive when both l and m are large. Instead, we use a single IRA code for each bit-plane, as shown in Fig. 4. Bit-plane i, i = 1, ..., l, is encoded with a code that has mβ_iN parity bits (the appropriate choice of the β_i is discussed later). These parity bits are then mapped m bits at a time to a length-β_iN symbol sequence X_ri (with an average power P_r/ᾱ). These symbol sequences are transmitted to the destination one after the other, starting with X_r1 and ending with X_rl. Since the total number of transmissions from the relay is ᾱN symbols, we have the constraint Σ_{i=1}^{l} β_i = ᾱ.
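The relay-side processing up to this point can be sketched compactly. The snippet below is an illustration with assumed toy parameters: an L-level uniform quantizer with the boundaries of Section III, followed by the bit-plane split (W_1 = least-significant bits).

```python
import numpy as np

L, q = 4, 1.5                        # quantizer levels and step size (assumed)
l = int(np.log2(L))                  # bits per quantization index

# Boundaries: k_0 = -inf, k_i = (i - L/2) q for i = 1..L-1, k_L = +inf
edges = (np.arange(1, L) - L / 2.0) * q

rng = np.random.default_rng(0)
yr = rng.normal(scale=2.0, size=1000)          # stand-in for the received Yr
# Region w is {x : k_w <= x < k_{w+1}}, so count boundaries <= x
w = np.searchsorted(edges, yr, side='right')   # indices in {0, ..., L-1}

# Bit-plane split: planes[0] = W1 (LSBs), planes[-1] = Wl (MSBs)
planes = [(w >> i) & 1 for i in range(l)]
```

Each plane would then be encoded by its own IRA code with mβ_iN parity bits, as described above.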
Decoding: Note that the systematic bits of the IRA code are not transmitted over the physical channel. However, since Y_d1 at the destination is correlated with W, one can think of the systematic bits as being transmitted over a virtual correlation channel with Y_d1 as the output. The decoding of the bit-planes is done in stages, starting with W_1 and ending with W_l. Thus, when attempting to decode W_i, the calculation of the log-likelihood ratios (LLRs) corresponding to the systematic bits uses not only Y_d1 but also Ŵ_1, ..., Ŵ_{i−1}, the decoded versions of the respective quantization index bit-planes from the previous stages, as shown in Fig. 5. This decoding strategy follows directly from the chain rule of entropy, using which the information-theoretic constraint (1) necessary for recovery of the quantization index sequence W can be rewritten as

α Σ_{i=1}^{l} H(W_i | Y_d1, W_1, ..., W_{i−1}) ≤ Σ_{i=1}^{l} β_i I(X_r; Y_d2).    (4)

For each bit-plane i, i = 1, ..., l, we choose β_i to satisfy the individual constraint

α H(W_i | Y_d1, W_1, ..., W_{i−1}) ≤ β_i I(X_r; Y_d2).    (5)

This makes sure that the overall conditional entropy of W given Y_d1 satisfies (4). Thus, if the codes used are capacity-achieving, the proposed methodology should guarantee that the quantization index sequence W is recoverable.
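The rate allocation in (5) can be made concrete with a toy computation. All numbers below are assumed for illustration (they are not the paper's optimized values): given the conditional bit-plane entropies and I(X_r; Y_d2), each β_i follows directly, and feasibility against Σ β_i ≤ ᾱ = 1 − α can be checked.

```python
alpha = 0.6                     # half-duplex time-sharing constant (assumed)
I_rd = 1.8                      # I(Xr; Yd2) in bits/symbol (assumed)
H_cond = [0.55, 0.30]           # H(Wi | Yd1, W1..W_{i-1}) per bit-plane (assumed)

# Choose beta_i to satisfy (5) with equality
beta = [alpha * h / I_rd for h in H_cond]

# Total relay air-time must fit within the T2 period of length (1 - alpha) N
feasible = sum(beta) <= 1.0 - alpha
```

If the chosen quantizer made the allocation infeasible, one would have to reduce the rate or revisit α, q, and the power split.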
Noting that multiple parity bits of an IRA code are mapped to the same modulated symbol, we employ an iterative soft-demodulation strategy [15]. The iterative strategy for recovering the quantization indices is summarized below:


1: Repeat for all bit-planes i = 1, ..., l.
2: Initialize the extrinsic LLRs (from the parity nodes to the soft demodulator) L_e = 0.
3: Use Y_d1 and Ŵ_1, ..., Ŵ_{i−1} to calculate the a-priori LLRs for the systematic nodes.
4: While (stopping criterion not met)²
5:   (Soft demodulation) Use Y_d2 and L_e to calculate the a-priori LLRs for the parity nodes.
6:   Run one iteration of the belief propagation (BP) algorithm on the IRA decoding graph (L_e is updated).
7: End while.
8: Obtain Ŵ_i by hard-thresholding the a-posteriori LLRs from the systematic nodes.

After decoding all the bit-planes, the DJSCC decoder passes the estimates Ŵ_1, ..., Ŵ_l to the message decoder.
C. Message Decoding

The destination uses MSD on the m LDPC decoding graphs to recover the corresponding codewords, and hence the original message sequence. There are two types of variable nodes at each stage of the LDPC decoder. The first type corresponds to the symbols received during T1; the a-priori LLRs for these nodes are calculated using Y_d1, Ŵ_1, ..., Ŵ_l, and the decoded codewords corresponding to the previous stages. The second type of variable nodes are those corresponding to T2; for these nodes, Y_d2 and the decoded codewords corresponding to the previous stages are used to evaluate the a-priori LLRs.
D. Code Design

To optimize the degree distributions of the LDPC and IRA codes, we first use the information-theoretic analysis of Section III to evaluate (for a given relay position and power P_r) the optimum parameters α, q, P_s1, and P_s2 that are required to achieve a target transmission rate of R b/s. The analysis also yields the target rates R_i, i = 1, ..., m, for the individual LDPC codes, as well as the β_i, i = 1, ..., l, that govern the target rates of the individual IRA codes.

The degree distributions for the multi-level LDPC and IRA codes are designed using the EXIT chart strategy [17] with the Gaussian approximation [13]. The approach we use is a direct consequence of the chain rule of mutual information/entropy, i.e., we assume perfect knowledge of the prior bit-planes and no information about subsequent ones. This simplifies the design process in the sense that the individual codes can be designed one by one, in a serial fashion. For each LDPC level, we take into account the fact that there are two types of variable nodes with different SNR characteristics: the first type corresponding to T1 and the other to T2. The design process is similar to the one in [5], so we do not include the details in this paper. In the following, we briefly explain how the degree distributions of a single level of IRA codes are optimized; the procedure has to be repeated for all l levels.

Each IRA code has two types of variable nodes, the systematic and the parity nodes. The parity nodes are divided into m groups; corresponding bits from each group map to the same M-ary symbol. As shown in Fig. 5, each parity node is connected to two consecutive check nodes and vice versa, with as many parity nodes as check nodes. Thus the check nodes can also be divided into m types. We fix the check nodes within group i, i = 1, ..., m, to have a regular degree of d_i, and then design the systematic node degree distribution λ(x) = Σ_{d=1}^{D} λ_d x^{d−1}, where D is the maximum systematic node degree. Since each group contains an equal number of check nodes, the overall check node degree distribution is given as ρ(x) = Σ_{i=1}^{m} ρ_i x^{d_i−1} with ρ_i = d_i / Σ_{j=1}^{m} d_j.

Fig. 6. Information flow for IRA codes.

Let I_sc be the a-priori information from the systematic nodes to the check nodes, as shown in Fig. 6. If I_pc^i is the information flow from the parity to the check nodes in group i, the information from the check to the parity nodes can be evaluated using the approximate check-to-bit-node duality [12] as

I_cp^i ≈ 1 − J( √( d_i [J⁻¹(1 − I_sc)]² + [J⁻¹(1 − I_pc^i)]² ) ),    (6)

where J(σ) is the information that a log-likelihood ratio drawn from a Gaussian distribution of mean σ²/2 and variance σ² conveys about the bit it represents. The information from the parity nodes in group i to the soft demodulator is given as I_pd^i = J( √2 · J⁻¹(I_cp^i) ). For the informations I_pd^j, j = 1, ..., m, and given channel conditions, we evaluate the EXIT function of the soft demodulator using Monte-Carlo simulations [18] to obtain I_dp^i, and consequently

I_pc^i = J( √( [J⁻¹(I_cp^i)]² + [J⁻¹(I_dp^i)]² ) )    (7)

for all i = 1, ..., m. This completes one iteration of decoding from the check to the parity nodes, to the soft demodulator, and back to the parity and check nodes. In order to simplify the optimization process, we assume that the iterations on this side of the decoding graph (the right-hand side of the check nodes in Fig. 6) continue until a fixed point is reached. In other words, for a given I_sc, we initially assume the I_pc^i to be zero, and then continue the iterations specified by (6) and (7) until all of I_pc^1, ..., I_pc^m converge to fixed values. If I_pc^{i*} denotes the fixed point, the check-node to systematic-node information is given as

I_cs ≈ 1 − Σ_{i=1}^{m} ρ_i J( √( (d_i − 1)[J⁻¹(1 − I_sc)]² + 2[J⁻¹(1 − I_pc^{i*})]² ) ).    (8)

For convergence of the IRA code at the j-th level, we need to satisfy the constraint

Σ_{d=1}^{D} λ_d^j J( √( (d − 1)[J⁻¹(I_cs)]² + [J⁻¹(I_ch)]² ) ) > I_sc    (9)

for all I_sc ∈ [0, 1), where I_ch is the information on the systematic nodes obtained from the (virtual correlation) channel. When designing the i-th level IRA code, we have I_ch = I(W_i; Y_d1 | W_1, ..., W_{i−1}). Note that I_cs in (9) is in fact a function of I_sc, but we omit that dependence for notational convenience. In addition to (9), we have the trivial constraints Σ_{d=1}^{D} λ_d = 1 and λ_d ≥ 0 for all d = 1, ..., D. Under these constraints, the IRA code rate needs to be maximized, which is equivalent to maximizing Σ_{d=1}^{D} λ_d / d. By discretizing I_sc on the interval [0, 1), the optimization can easily be solved using linear programming.

²In our simulations, we stop when either the maximum number of iterations has been exceeded or the correct codeword has been decoded.
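To make the linear-programming step concrete, the sketch below optimizes λ(x) for one level. It is our own illustration under stated assumptions: the widely used polynomial approximation of J(·) and its inverse (due to Brännström et al.), a fixed placeholder I_cs(I_sc) curve standing in for the fixed-point/Monte-Carlo demodulator characteristic of (6)-(8), and an assumed I_ch.

```python
import numpy as np
from scipy.optimize import linprog

# Polynomial approximation of the J function and its inverse (assumed here)
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def J_inv(I):
    return (-np.log2(1.0 - I ** (1.0 / H3)) / H1) ** (1.0 / (2.0 * H2))

D = 30                                     # max systematic node degree (assumed)
I_sc = np.linspace(0.0, 0.995, 200)        # discretized a-priori information
I_ch = 0.5                                 # virtual-channel information (assumed)
I_cs = 0.9 * I_sc                          # placeholder for the true I_cs(I_sc)

# Constraint (9): sum_d lambda_d * A[t, d] >= I_sc[t] at every grid point
degs = np.arange(1, D + 1)
sigma = np.sqrt((degs[None, :] - 1) * J_inv(I_cs)[:, None] ** 2
                + J_inv(I_ch) ** 2)
A = J(sigma)

# Maximize sum_d lambda_d / d  <=>  minimize -sum_d lambda_d / d
res = linprog(c=-1.0 / degs,
              A_ub=-A, b_ub=-I_sc,                 # (9) rearranged to <= form
              A_eq=np.ones((1, D)), b_eq=[1.0],    # sum_d lambda_d = 1
              bounds=[(0.0, 1.0)] * D)
lam = res.x
```

In the actual design, the placeholder curve would be replaced by the I_cs obtained from the fixed-point iteration of (6)-(8) at the target channel parameters.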
V. SIMULATION RESULTS

In this section, we present simulation results for a 4-PAM CF relaying system with transmission rates of 1.0 and 1.5 b/s. The setup we consider corresponds to d_rd = 0.2 m, d_sr = 0.95 m, and P_r = −6 dB. We list the optimized information-theoretic parameters required to achieve the target transmission rates in Table I.

TABLE I
OPTIMIZED PARAMETERS FOR 4-PAM CF RELAYING WITH d_sr = 0.95 m, d_rd = 0.2 m, AND P_r = −6 dB. ALL POWERS ARE SPECIFIED IN dB AND RATES IN b/s.

Rate   α      β_1    β_2    P_s1   P_s2   q      R_1    R_2
1.0    0.57   0.26   0.17   4.63   2.61   1.55   0.63   0.83
1.5    0.65   0.28   0.09   8.65   7.25   2.44   0.37   0.67

Note that if the above information-theoretic parameters were used directly to design the LDPC and IRA codes, the practical coding losses would imply that the rates of the optimized codes are less than those required. Therefore, we keep α, P_s2, β_1, and β_2 fixed at the theoretical values and increase P_s1 gradually until codes of the required rates are achieved. The power P_s2 is not increased from its theoretical minimum so as not to increase the interference that X_s2 causes while decoding W [5].

We simulate the optimized degree distributions at a finite block length of N = 2 × 10^5 symbols. In this case too, we fix all parameters except P_s1 to the information-theoretic values and gradually increase this power until the desired bit-error rate (BER) of 10^−5 is achieved. Simulation results indicate that the overall power P_s required to achieve the target BER at a transmission rate of 1.0 b/s is only 0.56 dB more than the theoretical limit. On the other hand, at a transmission rate of 1.5 b/s, the proposed ML-CF scheme suffers a loss of only 0.63 dB from the theoretical bound.
VI. CONCLUSIONS

We have presented an ML-CF strategy for a half-duplex Gaussian relay network in which the transmissions from the source and the relay are drawn from an M-ary PAM constellation. Compression of the signal received at the relay is achieved by quantizing it before applying SW compression. Numerical evaluation of the information-theoretic analysis indicates that it is sufficient to consider an M-level quantizer, i.e., one does not gain much by going beyond M levels, while one suffers a significant degradation in performance with fewer than M levels. A practical coding scheme using LDPC and IRA codes was also presented: multi-level LDPC codes were used to encode the source message, whereas multiple IRA codes were used to implement multi-level DJSCC of the quantization indices. Simulation of the proposed methodology indicates performance close to the theoretical bound.
REFERENCES

[1] T. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Trans. Inf. Theory, vol. 25, no. 5, pp. 572-584, Sep. 1979.
[2] A. Host-Madsen, "Capacity bounds for cooperative diversity," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1522-1544, Apr. 2006.
[3] Z. Liu, V. Stankovic, and Z. Xiong, "Practical compress-forward code design for the half-duplex relay channel," in Proc. Conf. on Inf. Sci. and Systems, 2005.
[4] Z. Liu, V. Stankovic, and Z. Xiong, "Wyner-Ziv coding for the half-duplex relay channel," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, 2005, pp. v-1113.
[5] M. Uppal, Z. Liu, V. Stankovic, and Z. Xiong, "Compress-forward coding with BPSK modulation for the half-duplex Gaussian relay channel," IEEE Trans. Signal Process., vol. 57, no. 11, pp. 4467-4481, Nov. 2009.
[6] M. Uppal, G. Yue, X. Wang, and Z. Xiong, "A rateless coded protocol for half-duplex wireless relay channels," IEEE Trans. Signal Process., vol. 59, no. 1, pp. 209-222, Jan. 2011.
[7] D. Slepian and J. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inf. Theory, vol. 19, no. 4, pp. 471-480, Jul. 1973.
[8] R. Blasco Serrano, "Coding strategies for compress-and-forward relaying," Licentiate Thesis, Royal Institute of Technology (KTH), Stockholm, Sweden, 2010.
[9] A. Avestimehr, S. Diggavi, and D. Tse, "Wireless network information flow: A deterministic approach," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 1872-1905, Apr. 2011.
[10] V. Nagpal, I.-H. Wang, M. Jorgovanovic, D. Tse, and B. Nikolic, "Quantize-map-and-forward relaying: Coding and system design," in Proc. 48th Annual Allerton Conf. on Communication, Control, and Computing, 2010, pp. 443-450.
[11] S. H. Lim, Y.-H. Kim, A. El Gamal, and S.-Y. Chung, "Noisy network coding," IEEE Trans. Inf. Theory, vol. 57, no. 5, pp. 3132-3152, May 2011.
[12] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions: Model and erasure channel properties," IEEE Trans. Inf. Theory, vol. 50, no. 11, pp. 2657-2673, 2004.
[13] S. Chung, T. Richardson, and R. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 657-670, 2001.
[14] H. Imai and S. Hirakawa, "A new multilevel coding method using error-correcting codes," IEEE Trans. Inf. Theory, vol. 23, no. 3, pp. 371-377, May 1977.
[15] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 927-946, May 1998.
[16] U. Wachsmann, R. Fischer, and J. Huber, "Multilevel codes: Theoretical concepts and practical design rules," IEEE Trans. Inf. Theory, vol. 45, no. 5, pp. 1361-1391, Jul. 1999.
[17] S. ten Brink, "Designing iterative decoding schemes with the extrinsic information transfer chart," AEÜ Int. J. Electron. Commun., vol. 54, no. 6, pp. 389-398, 2000.
[18] C. Wang, S. Kulkarni, and H. Poor, "Density evolution for asymmetric memoryless channels," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4216-4236, 2005.

