
Modulation and Coding Project:

DVB-S2 communication chain

Quentin Bolsee, Huong Nguyen, Ilias Fassi-Fihri

May 18, 2016

1 Optimal communication chain over the ideal channel

1.1 Steps

1.1.1 Symbol mapping/demapping

The data is a randomly generated vector of 0/1:


bits_tx = randi([0, 1], Nbits, 1);

It can then be mapped to and demapped from symbols using the provided functions. Care should be taken
that Nbits is a multiple of the number of bits per symbol; otherwise the data cannot be translated into an
integer number of symbols. Some constellations are shown in figure 1.
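A minimal usage sketch follows; the names mapping and demapping stand in for the provided helper functions, whose exact signatures may differ:

Nbps = 4;                                  % bits per symbol (16-QAM)
Nbits = Nbps * 1000;                       % must be a multiple of Nbps
bits_tx = randi([0, 1], Nbits, 1);
symb_tx = mapping(bits_tx, Nbps, 'qam');   % assumed signature of the helper
bits_rx = demapping(symb_tx, Nbps, 'qam'); % should return bits_tx exactly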

(a) 2-PAM (BPSK) (b) 16-QAM

Figure 1: Constellation examples obtained with the mapping function.

1.1.2 Half-root Nyquist filter

Once the mapping/demapping was set up, the half-root Nyquist filter needed to be designed and applied
to the signal to complete the baseband model.

(a) Frequency domain (b) Time domain

Figure 2: Hrrc filter in frequency- and time-domain representations. In the time domain, both the
half-root Nyquist filter and the full Nyquist filter are shown. The roll-off factor is β = 0.3.

By observing the Hrrc filter in the frequency domain (figure 2), we can already see that applying it to
any signal will cut frequencies higher than fmax = fsymb. In the time domain, two important properties
of the Nyquist filter Hrc (the half-root filter applied twice) are visible:
• The peak has a unitary height by design, thanks to the normalization.
• There will be no Inter Symbol Interference (ISI), as the filter has a zero crossing at every t = nT.
This is not the case for the half-root filter.
We can apply those filters to a generated signal to see their effect. Figure 3 shows the result on the real
part of an upsampled QAM symbol sequence. As expected, no ISI occurs, thanks to the properties
of Hrrc.


Figure 3: Hrrc and Hrc filters applied to an upsampled sequence (M = 16). For Hrc, the symbol
samples are left untouched.
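A minimal frequency-domain construction of hrrc, assuming the parameter names used throughout this report (beta, fsymb, M); the taps are normalized so that the cascaded filter Hrc peaks at 1:

beta = 0.3;  fsymb = 2e6;  M = 16;  fs = M * fsymb;      % assumed parameter values
Ntaps = 16 * M + 1;                                      % odd number of taps
f = linspace(-fs/2, fs/2, Ntaps);
Hrc = zeros(size(f));                                    % raised-cosine frequency response
Hrc(abs(f) <= (1 - beta) * fsymb / 2) = 1;               % flat passband
idx = abs(f) > (1 - beta) * fsymb / 2 & abs(f) <= (1 + beta) * fsymb / 2;
Hrc(idx) = 0.5 * (1 + cos(pi / (beta * fsymb) * (abs(f(idx)) - (1 - beta) * fsymb / 2)));
h_rrc = real(fftshift(ifft(ifftshift(sqrt(Hrc)))));      % half-root impulse response
h_rrc = h_rrc / sqrt(sum(h_rrc.^2));                     % so conv(h_rrc, h_rrc) peaks at 1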

1.1.3 Noise addition

Once Hrrc is ready, we can introduce the noise between the emitter TX and receiver RX, by adding
this noise to the modulation signal:

yRX (t) = g(−t) ∗ (g(t) ∗ yRX (t) + n(t)) = Hrc ∗ (Hrc ∗ yRX (t) + n(t))

Where n(t) is the noise (AWGN in this case). This noise is complex, as it simulates a carrier noise
that would affect both the I and Q components of the bandpass signal.
The yRX (t) signal still needs to be downsampled by a M factor to yield (perfectly) sampled symbols.
Later, we will also simulate skewing of the sampling by several a sampling time shift.
The obtained symbols are, because of the noise, not perfectly aligned with the constellation. According
to the ML criterion, the best guess for a symbol is the closest point in the constellation. Sometimes,
the displacement caused by the noise is too great to avoid an error, leading to a non-zero bit error rate
(BER). Gray coding helps limit the error to a single bit per symbol in many cases. Figure 4 shows
an example of a scatter plot around the constellation.
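A sketch of these channel and detection steps (assumed names: s_tx is the shaped TX signal, h_rrc the half-root filter, constellation the vector of constellation points; sigma2, the complex noise variance, is derived from the target Eb/N0 as discussed in the Q/A section):

n = sqrt(sigma2 / 2) * (randn(size(s_tx)) + 1j * randn(size(s_tx)));  % complex AWGN
r = conv(s_tx + n, h_rrc, 'same');               % matched filter g(-t) = h_rrc(t)
symb_rx = r(1:M:end);                            % sample at t = n*Tsymb (alignment offset omitted)
% ML detection: pick the closest constellation point for each received symbol.
[~, k] = min(abs(bsxfun(@minus, symb_rx(:), constellation(:).')), [], 2);
symb_hat = constellation(k);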


Figure 4: Scatter plot of the decoded symbols at RX, for 16-QAM, 10000 sent symbols and
Eb/N0 = 15 dB.

By studying the BER in various cases, we can validate the theoretical curves, as shown in figure 5.
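If the Communications System Toolbox is available, the theoretical curves can be overlaid on the measured points (ber_meas below is assumed to come from our simulation):

EbN0dB = 0:2:20;
semilogy(EbN0dB, berawgn(EbN0dB, 'qam', 16), '-');   % theoretical 16-QAM curve
hold on;
semilogy(EbN0dB, ber_meas, 'o');                     % measured BER from the chain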

Figure 5: Empirical BER curves for PAM (M = 2, 4, 8, 16) and QAM (M = 4, 16, 64).

1.2 Q/A

• It is proposed to use the baseband equivalent model of the AWGN channel. Would
it be possible to live with a bandpass implementation of the system?
No, it is not possible to implement a bandpass model in Matlab because the carrier frequency
fcar is extremely high (GHz range). It would require considerable computing power to process
the data at such a high sampling rate (> 2fcar for anti-aliasing reasons).
• How do you choose the sample rate in Matlab?
The sampling frequency fs is the product of the upsampling factor M and the symbol frequency
fsymb. The value of M is arbitrary at this point but should be at least 2, otherwise the Nyquist
filter cannot be built (aliasing in the frequency domain because of the roll-off).
A sufficiently high value of M shows the smooth curves of the Nyquist filter when applied to
the symbols (figure 6). The exact value of M will be more important in the next parts of the
project.


(a) upsampling M = 3 (b) upsampling M = 16

Figure 6: Hrrc filter for different values of the upsampling factor.

We can also note that a higher M gives better statistics once the noise is added (more samples
are disturbed by the random noise).
• How do you make sure you simulate the desired Eb/N0 ratio?
The obtained BER curves as a function of Eb/N0 fit the theoretical ones exactly. Another
way to verify it would be to measure the generated noise's energy and compare it with the
signal's energy.
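A sketch of that check, assuming s_tx is the shaped TX signal and n the generated noise vector:

Eb_meas = sum(abs(s_tx).^2) / fs / Nbits;     % measured energy per information bit
N0_meas = mean(abs(n).^2) / fs;               % measured PSD of the complex noise
EbN0_meas_dB = 10 * log10(Eb_meas / N0_meas); % should match the requested value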
• How do you choose the number of transmitted data packets and their length?
This is arbitrary, but a high number of symbols is necessary to obtain sufficiently good statistics
for the BER. The number of symbols sent simply depends on Nbps :

Nsymb = Nbits /Nbps

• Determine the supported (uncoded) bit rate as a function of the physical bandwidth.
We know the general formula:

Rb = Nbps fsymb

The maximum is reached when fsymb is set at the physical bandwidth.
• Explain the trade-off communication capacity/reliability achieved by varying the
constellation size.
Increasing the number of symbols M while the energy of the signal remains the same (same
constellation extent) implies that the constellation has a higher density. The noise, on the
other hand, keeps the same intensity as before, meaning it can more easily move a received
symbol onto a different point of the constellation.
• Why do we choose the halfroot Nyquist filter to shape the complex symbols?
The half-root Nyquist filter has unique properties, as explained before:
1. No ISI, thanks to its zero-crossings
2. It makes the signal smoother, limiting the used bandwidth to fsymb
Moreover, this allows us to add the complex noise in a realistic way, just before the matched
filter (Hrrc in this case) is applied.
• How do we implement the optimal demodulator? Give the optimisation criterion.
We do not simulate demodulation per se, as the carrier I and Q signals are not simulated (as ex-
plained before). That means the matched filter in our simulation does not include the convolution
with a sine/cosine. The only part of it that we simulate is the shaping pulse Hrrc.
The result is then sampled at exactly t = nTsymb.
The optimization criterion states:
“the matched filter should maximize the SNR at its output”
We can theoretically prove that the optimal matched filter H(f) for a signal S(f) is simply:

H(f) = S∗(f)

or, in the time domain (for a real signal):

h(t) = s(−t)

Because the half-root Nyquist filter s(t) = hrrc(t) is an even function, we see that h(t) =
hrrc(−t) = hrrc(t). This explains why demodulation is simulated the same way as modulation
(convolution by hrrc(t)).
It is interesting to note that, because of the upsampling, the noise of neighboring samples has an
effect at the sampling times t = nTsymb. This is because of the shape of hrrc(t), which can only
guarantee zero interference at every nTsymb.
• How do we implement the optimal detector? Give the optimisation criterion.
The demapping function simply picks the closest symbol in the constellation. More formally, the
Maximum Likelihood (ML) criterion states:
“The Euclidean distance between the received vector r and the detected symbol sm should be
minimal”

2 Low-density parity check code

2.1 Steps

2.1.1 Small size encoder, hard decoder

When the code vector is relatively small (< 16 bits), it is still realistic to use actual matrix multiplication to
compute it. If we are provided with a valid parity check matrix H in its standard form of size (M, N),
we can compute the generator matrix G (the message length being K = N − M):

H = [I_M | P^T]  ⇒  G = [P | I_K]

In our case, the provided H was not standard:


 
H = [ 0 1 0 1 1 0 0 1
      1 1 1 0 0 1 0 0
      0 0 1 0 0 1 1 1
      1 0 0 1 1 0 1 0 ]

After linear combinations of the rows (in modulo-2 arithmetic), we get:


 
H′ = [ 1 0 0 0 0 0 0 1
       0 1 0 0 0 0 1 0
       0 0 1 0 0 1 1 1
       0 0 0 1 1 0 1 1 ]

Applying this matrix to a valid code vector u will give the same result as H (uH′T = [0 . . . 0]) because
of linearity. From H′ we can easily compute G and apply it to a message d to find the code vector
u = dG.
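A minimal sketch of this computation, assuming H_std is the standardized matrix H′ in the form [I_M | P^T]:

[M, N] = size(H_std);  K = N - M;
P = H_std(:, M+1:end)';                  % K x M block, transpose of P^T
G = [P, eye(K)];                         % systematic generator matrix
u = mod(d * G, 2);                       % encode a 1 x K message d
assert(all(mod(u * H_std', 2) == 0));    % u satisfies every check equation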
The decoder, on the other hand, will not be implemented using syndrome decoding, because of the
high number of cases (large lookup tables would be required). Instead, a Tanner graph implementation
is used (a minimal Matlab sketch follows the list of steps below). Steps:
• build the graph from the H matrix. The result is M check nodes fj and N variable nodes ci .
Let’s call L the set of links (i, j) of the graph.
• initialize qij , the message from ci to fj with the initial guess from the code vector u:

∀(i, j) ∈ L : qij = ui

• for k = 1, . . . , Niter
1. compute rji , the response from fj to ci :

∀(i, j) ∈ L :   r_ji = Σ_{i′ : (i′,j)∈L, i′≠i} q_i′j = q_ij + Σ_{i′ : (i′,j)∈L} q_i′j   (mod 2)

The last equality comes from modulo-2 logic: adding qij to itself does not affect the result.
This allows for a fast Matlab implementation using the sum function.
2. compute the estimated code vector ûi with majority voting. The voters for a given ci include
every fj connected to it, as well as the initial guess ui .
3. compute the new qij . This value is now dependent on j, because the majority voting cannot
include fj itself when ci is sending a message back to it.
• keep only the last part of ûi to remove the M parity bits at the beginning:

d̂i = ûi+M ,   i = 1, . . . , N − M
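A minimal Matlab sketch of these steps (for readability, the variable-to-check update is simplified: it does not exclude the destination check node, which the full implementation does):

function d_hat = ldpc_hard_decode(H, u, Niter)
    % Hard-decision message passing on the Tanner graph of H (M x N).
    [M, N] = size(H);
    u_hat = u;                                     % initial guess, 1 x N
    for k = 1:Niter
        q = repmat(u_hat, M, 1) .* H;              % messages c_i -> f_j on the links
        parity = mod(sum(q, 2), 2);                % total parity of each check node
        r = mod(repmat(parity, 1, N) + q, 2) .* H; % exclude q_ij by adding it back mod 2
        votes = sum(r, 1) + u;                     % "1" votes: connected checks + initial guess
        nvoters = sum(H, 1) + 1;                   % number of voters per variable node
        u_hat = double(votes > nvoters / 2);       % majority decision
        if all(mod(u_hat * H', 2) == 0), break; end
    end
    d_hat = u_hat(M+1:end);                        % drop the M parity bits at the front
end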

Once the implementation was done, we could check that the encoding and decoding behave correctly.
This is done by first encoding an example message:

d = [0 1 1 0]  ⇒  u = dG = [0 1 0 1 0 1 1 0]

We see that the message is repeated at the end, thanks to the identity part of G (systematic code).
Decoding it as-is gives exactly d. However, if we try to decode

u′ = [1 1 0 1 0 1 1 0],

which is not a valid code vector, the decoder takes 4 iterations to find d again.

2.1.2 Hard decoding LDPC

When using the large, sparse H matrices (128 × 256) generated by makeLDPC, we can no longer afford
to standardize H and compute G explicitly. Instead, we use the provided makeParityChk function,
as sketched below. The simulation is the same as usual, except that the data to be sent is first cut
into slices of length N − M (here 128) and becomes a series of code vectors of length N.
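A sketch of the resulting encoding loop; the signature makeParityChk(d, H, strategy), returning the parity bits for a column message d, is assumed from the usual distribution of these helpers:

[M, N] = size(H);  K = N - M;              % here 128 and 256, so K = 128
nBlocks = Nbits / K;
u = zeros(N, nBlocks);
for b = 1:nBlocks
    d = bits_tx((b-1)*K + (1:K));          % one slice of K information bits (column)
    p = makeParityChk(d, H, 0);            % M parity bits (assumed helper signature)
    u(:, b) = [p; d];                      % systematic code vector of length N
end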


Figure 7: Performance of hard decoding for 4-QAM, without LDPC and with K = 3 or K = 5 decoder
iterations. More iterations are better, except when Eb/N0 is very low: in that case, it is better not to
use hard decoding.

2.1.3 Soft decoding LDPC

Soft decoding relies on the probability that each bit is a 0 or a 1. We chose to use the min-sum algorithm
for two reasons:
• Log-domain algorithms are much easier in terms of computations (sums instead of products).
• The min-sum algorithm provides even more optimization by not requiring the use of the
function ϕ(x) = log((e^x + 1)/(e^x − 1)).
The results for BPSK are extremely good: every bit is corrected for a sufficiently high Eb/N0 (above
4 dB). Figure 8 shows the results compared to hard LDPC.
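A sketch of the min-sum check-node update, which replaces the ϕ-based sum-product rule (q holds the variable-to-check LLRs on the links of H, assumed non-zero there):

function r = minsum_check_update(q, H)
    % r(j,i) = product of signs * minimum magnitude over the other links of check j.
    [M, N] = size(H);
    r = zeros(M, N);
    for j = 1:M
        idx = find(H(j, :));                          % variable nodes linked to f_j
        for i = idx
            others = idx(idx ~= i);                   % exclude the destination node
            r(j, i) = prod(sign(q(j, others))) * min(abs(q(j, others)));
        end
    end
end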


Figure 8: Performance of the hard/soft LDPC decoding for BPSK for very low values of EbN0. Soft
decoding always outperforms hard decoding, and is beneficial even for extremely low EbN0.

2.2 Q/A

• When building the new BER curves, do you consider the uncoded or coded bit
energy on the x-axis?
The BER curve should only include the uncoded bit energy. We know that sending the parity
bits with the data requires more symbols. However, if the channel has more physical bandwidth
available, we can simply assume that the symbol rate is increased to keep the same useful bit rate.
• How do you limit the number of decoder iterations?
We have noticed that beyond 5 iterations (both hard and soft), the additional gain is
small. When Eb/N0 is too low, hard decoding is problematic and becomes worse when iterating
more than once (figure 7).
• Why is it much simpler to implement the soft decoder for BPSK or QPSK than for
16-QAM or 64-QAM?
Soft decoding requires knowledge of the reliability of each bit of the code word. For sophisticated
constellations, it is difficult to evaluate the reliability of a given bit because of the high number
of neighboring symbols. Gray coding almost guarantees that only 1 bit will be affected for low
enough Eb/N0, but formally we would need to compute the distance to every other symbol in the
constellation to have a precise idea of the reliability of each bit.

Figure 9: For 16-QAM, many neighboring symbols have to be considered to compute the reliability of
each bit.

For BPSK, however, it is almost trivial: the sign and the magnitude of the received symbol
indicate the most probable bit (0 if negative, 1 if positive) and its reliability (better for
high magnitudes). Therefore, it can be used as the raw input of the soft decoding algorithm in
the log domain.
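A sketch of that input, assuming BPSK symbols ±1 with bit 1 mapped to +1 (as above) and sigma2 the noise variance on the real axis:

% Exact per-bit LLR for BPSK: linear in the received sample.
LLR = 2 * real(symb_rx) / sigma2;   % sign -> most probable bit, magnitude -> reliability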
• Demonstrate analytically that the parity check matrix is easily deduced from the
generator matrix when the code is systematic.

The shape of the G matrix is:

G = [P | I_K],   K = N − M

It is easy to show that H must have the following shape:

H = [I_M | P^T]

This way, H is in accordance with its definition: a matrix that, multiplied with a valid code word
u, provides M linearly independent check equations, all yielding 0. Proof:

u = dG  ⇒  uH^T = dGH^T = d(P + P) = 0
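A self-contained numerical check with the standardized H′ from section 2.1.1:

H = [1 0 0 0 0 0 0 1;
     0 1 0 0 0 0 1 0;
     0 0 1 0 0 1 1 1;
     0 0 0 1 1 0 1 1];
G = [H(:, 5:end)', eye(4)];       % G = [P | I_K] deduced from H = [I_M | P^T]
disp(mod(G * H', 2));             % 4 x 4 all-zero matrix, as expected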

• Explain why we can apply linear combinations on the rows of the parity check matrix
to produce an equivalent systematic code.
As explained earlier, code words u are still compatible with the new matrix, as each check equation
is still verified: each row of H yields 0 when multiplied by u, which means any linear combination
of rows will also yield 0.
• Why is it especially important to have a sparse parity check matrix (even more
important than having a sparse generator matrix)?
LDPC decoding (hard or soft) implementations are based on the Tanner graph, which means
the number of computations is proportional to the number of links in the graph (and NOT the
number of nodes). This keeps the complexity below O(n²), where n is the order of
magnitude of M, N. A sparse matrix can get close to O(n) complexity.
• Explain why the check nodes only use the information received from the other
variable nodes when they reply to a variable node.
The check nodes have the task of guessing a variable node's value assuming every other node
is correct. This is because a check node can only know that 1 (or more) variable nodes are
wrong if its check equation is not satisfied (uH^T ≠ 0), but it has no way of knowing which node
is the culprit. Therefore, it simply assumes the variable node it is replying to has to be
reconsidered entirely.

3 Time and frequency synchronization

3.1 Steps

If we blindly apply the phase drift and look at the result in the constellation, we can observe two
distinct phenomena, as illustrated in figure 10:
1. the phase shift makes the constellation rotate by a fixed angle
2. the CFO seems to make each symbol rotate by a different value


Figure 10: Constellations showing the effect of a 60 ppm CFO (left) and a π/20 phase shift (right)
separately, with no noise added.

We can study the effect of the CFO more precisely by trying to compensate the phase drift that
occurred on each symbol (see figure 11). Interestingly, the symbols are still affected by the CFO: this
is due to the mismatch of the matched filter.


Figure 11: Constellations showing only the ISI caused by the CFO (the phase drift has been compensated),
using 16-QAM, no noise and, respectively, a 60 ppm and a 10 ppm CFO.

The BER can be computed when CFO is present. For a 2 GHz carrier frequency, we see that 2 ppm
gives acceptable results.


Figure 12: BER curves in presence of CFO (0, 2 and 10 ppm) with M = 16.



Figure 13: BER curves in presence of a time shift (t0 = 0, 0.02T, 0.05T and 0.1T).


Figure 14: Sampling time found by the Gardner algorithm compared to the correct sampling time
with a time shift of 0.25T . After a few hundred samples, the algorithm locks onto the correct sampling
time (the red dots are aligned with the blue dots most of the time).


Figure 15: Evolution of the error estimate ε over time for a time shift of 0.25T. Increasing the K
parameter increases the sensitivity of the algorithm, but makes it more unstable.
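A sketch of the feedback loop behind figures 14 and 15, in its textbook Gardner form (names assumed, not necessarily our exact code): y holds the symbol-rate samples at the current timing estimate and y_mid the samples halfway between consecutive symbols.

eps_hat = zeros(Nsymb, 1);                         % timing error estimate (in units of T)
for n = 2:Nsymb
    e = real(conj(y_mid(n-1)) * (y(n) - y(n-1)));  % error sample: zero on average when
    eps_hat(n) = eps_hat(n-1) - K * e;             % sampling is correct; K is the loop gain
end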

3.2 Q/A

• Derive analytically the baseband model of the channel including the synchronisation
errors.

The CFO and the carrier phase error are implemented by multiplying the received signal by

e^{j(2π∆f t + φ)}

where ∆f is the CFO and φ is the phase error.
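A sketch of how this multiplication is applied to the sampled baseband signal (names assumed; y_rx is a column vector):

t = (0:numel(y_rx)-1).' / fs;                   % sample instants
dF = 10e-6 * fcar;                              % e.g. a 10 ppm CFO on the carrier fcar
y_cfo = y_rx .* exp(1j * (2*pi*dF*t + phi));    % apply CFO and phase error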


• How do you separate the impact of the carrier phase drift and ISI due to the CFO
in your simulation?
We can cancel the phase drift by multiplying each symbol by the opposite of the accumulated
phase, as sketched below. The ISI is due to the phase changing slightly over the span of the
convolution of a single symbol with the matched filter, and we can study it separately this way.
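A sketch of that compensation at the symbol instants t = nTsymb (names assumed):

n = (0:Nsymb-1).';
symb_fixed = symb_rx .* exp(-1j * (2*pi*dF*n/fsymb + phi));  % undo the deterministic rotation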
• How do you simulate the sampling time shift in practice?
If we significantly increase the oversampling factor, we can approximate a continuous time shift
by a discrete one, as sketched below.
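A sketch, assuming t0 is a multiple of the sample period 1/fs = 1/(M fsymb) and r is the matched-filtered signal:

shift = round(t0 * fs);                 % time shift expressed in samples
symb_shifted = r(1 + shift : M : end);  % skewed downsampling of the filtered signal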
• How do you select the simulated Eb/N0 ratio?
• How do you select the lengths of the pilot and data sequences?
• In which order are the synchronisation effects estimated and compensated? Why?
• Explain intuitively how the error is computed in the Gardner algorithm. Why is
the Gardner algorithm robust to CFO?
• Explain intuitively why the differential cross-correlator is better suited than the
usual cross-correlator. Isn't it interesting to start the summation at k = 0 (no time
shift)?
• Are the frame and frequency acquisition algorithms optimal? If yes, give the opti-
misation criterion.
