Outline
Channel coding protects digital data from errors by selectively introducing redundancy into the transmitted data.
Block codes: Hamming codes, cyclic codes, BCH codes, Reed-Solomon codes
Convolutional codes
Trellis Coded Modulation: combines coding and modulation to achieve significant coding gains without compromising bandwidth efficiency.
Repetition code
Linear block code, e.g. Hamming
Cyclic code, e.g. CRC
BCH and RS codes
Convolutional code
Viterbi decoding
Turbo code
Coded Modulation
TCM
Parity Check
Hamming's Solution
Encoding: H(7,4)
Multiple Checksums
Message = [a b c d]

r = (a + b + d) mod 2
s = (a + b + c) mod 2
t = (b + c + d) mod 2

For Message = [1 0 1 0]:
r = (1 + 0 + 0) mod 2 = 1
s = (1 + 0 + 1) mod 2 = 0
t = (0 + 1 + 0) mod 2 = 1

Code = [r s a t b c d] = [1 0 1 1 0 1 0]
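The encoding above can be sketched in a few lines (Python used here for illustration; the bit layout [r s a t b c d] follows the slide's example):

```python
# Hamming(7,4) encoder following the checksum layout above.
# The codeword ordering [r s a t b c d] matches the slide's example.

def hamming74_encode(a, b, c, d):
    """Encode message bits [a b c d] into the 7-bit codeword [r s a t b c d]."""
    r = (a + b + d) % 2
    s = (a + b + c) % 2
    t = (b + c + d) % 2
    return [r, s, a, t, b, c, d]

print(hamming74_encode(1, 0, 1, 0))  # -> [1, 0, 1, 1, 0, 1, 0]
```

Any single bit error in the 7 transmitted bits can then be located by recomputing the three checksums at the receiver.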
Coding Gain
For a coding scheme, the coding gain at a given bit error probability is defined as the difference between the energy per information bit required by uncoded transmission and that required by the coding scheme to achieve the given bit error probability.
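In dB terms this definition is a simple difference of the two operating points; a minimal sketch (the numeric values are illustrative assumptions, except that uncoded BPSK needs roughly 9.6 dB at BER = 1e-5):

```python
# Coding gain sketch: the gain (in dB) is the reduction in required Eb/N0
# at a fixed bit error probability, relative to uncoded transmission.

def coding_gain_db(ebn0_uncoded_db, ebn0_coded_db):
    """Coding gain at a given BER; both operating points in dB."""
    return ebn0_uncoded_db - ebn0_coded_db

# Uncoded BPSK needs roughly 9.6 dB at BER = 1e-5; suppose a code reaches
# the same BER at 6.6 dB (an assumed value for illustration):
print(f"{coding_gain_db(9.6, 6.6):.1f} dB")  # -> 3.0 dB
```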
Cyclic code
Cyclic codes are of interest and importance because:
- They possess a rich algebraic structure that can be utilized in a variety of ways.
- They have extremely concise specifications.
- They can be efficiently implemented using simple shift registers.
- Many practically important codes are cyclic.

In practice, cyclic codes are often used for error detection (cyclic redundancy check, CRC), e.g. in packet networks: when the receiver detects an error, it requests a retransmission (ARQ).
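A bitwise CRC computation can be sketched as follows (assuming the common CRC-8 polynomial x^8 + x^2 + x + 1, i.e. 0x07; real systems use standardized polynomials such as CRC-32 in Ethernet):

```python
# Bitwise CRC sketch for error detection. Polynomial 0x07 (CRC-8) is an
# illustrative choice; deployed protocols specify their own polynomials.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:                      # MSB set: divide by polynomial
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

msg = b"hello"
check = crc8(msg)
# Receiver recomputes the CRC; a mismatch triggers a retransmission request (ARQ).
assert crc8(msg) == check
corrupted = b"hellp"
assert crc8(corrupted) != check  # error confined to one byte: always detected
```

A degree-8 CRC is guaranteed to detect any burst error of length at most 8 bits, which is why the single corrupted byte above cannot slip through.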
BCH Performance
Reed-Solomon Codes
Convolutional codes map information to code bits sequentially by convolving a sequence of information bits with generator sequences. A convolutional encoder encodes k information bits into n > k code bits at each time step. Convolutional codes can be regarded as block codes for which the encoder has a certain structure such that the encoding operation can be expressed as a convolution.
Encoder
Convolutional codes are applied in applications that require good performance with low implementation cost.
- They operate on code streams (not on blocks).
- Convolutional codes have memory: previous bits are utilized to encode or decode the following bits (block codes are memoryless).
- Convolutional codes achieve good performance by expanding their memory depth.
- Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth (number of register stages).
- The constraint length C = n(L+1) is defined as the number of encoded bits that one message bit can influence.
Example
Convolutional encoder with k = 1, n = 2, L = 2. A convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner; the generated code is thus a function of the input and of the state of the FSM. In this (n,k,L) = (2,1,2) encoder each message bit influences a span of C = n(L+1) = 6 successive output bits, the constraint length C. For generating the n-bit output, a k = 1 encoder uses n generator sequences operating on a single shift register.
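Such an encoder can be sketched directly as an FSM. The generator polynomials g1 = 1 + D^2 and g2 = 1 + D + D^2 used below are an assumption (the encoder figure is not reproduced here), chosen to be consistent with the decoded code sequence given later in this section:

```python
# Sketch of an (n,k,L) = (2,1,2) convolutional encoder as an FSM.
# Assumed generators: g1 = 1 + D^2 -> (1,0,1), g2 = 1 + D + D^2 -> (1,1,1);
# the slides' encoder figure may use a different assignment.

def conv_encode(bits, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Encode k = 1 input bits into n = 2 output bits per time step."""
    state = [0, 0]                   # L = 2 register stages (the FSM state)
    out = []
    for u in bits:
        window = [u] + state         # current input plus stored bits
        out.append(sum(b * g for b, g in zip(window, g1)) % 2)
        out.append(sum(b * g for b, g in zip(window, g2)) % 2)
        state = [u] + state[:-1]     # shift the register
    return out

# 3 message bits + 2 zero flush bits -> 10 code bits:
print(conv_encode([1, 1, 0, 0, 0]))  # -> [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]
```

The two appended zeros drive the register back to the all-zero state, which is what "clearing the decoder" in the exercise below refers to.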
Assume a three-bit message is transmitted and encoded by the (2,1,2) convolutional encoder. To clear the decoder, two zero bits are appended after the message; thus 5 bits are encoded, resulting in 10 code bits. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is received (including some errors). What comes out of the decoder, i.e. what was most likely the transmitted code sequence, and what were the respective message bits?
[Trellis diagram: states a, b, c, d, with the decoder outputs that result if each path is selected]
For the path with 8 correct and 2 erroneous bits (p = 0.1):
correct bits: 1+1+2+2+2 = 8, contribution 8 x ln(0.9) ~ 8 x (-0.11) = -0.88
erroneous bits: 1+1+0+0+0 = 2, contribution 2 x ln(0.1) ~ 2 x (-2.30) = -4.6
total path metric: -5.48
This path has the largest metric; verify that you get the same result! Note also the Hamming distances!
The problem of optimum decoding is to find the minimum-distance path from the initial state back to the initial state (below, from S0 to S0). The path metric is the sum of the branch metrics:

ln p(y | x^(m)) = sum_j ln p(y_j | x_j^(m))
Here y is the received code sequence and x^(m) is the decoder's output sequence for the m-th path.
An exhaustive maximum likelihood method must search all the paths in the trellis (2^k paths emerging from and entering each of the 2^L states for an (n,k,L) code). The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis.
DAYANANDA SAGAR COLLEGE OF ENGINEERING, BANGALORE
Assume for simplicity a convolutional code with k = 1, so that up to 2^k = 2 branches can enter each state in the trellis diagram. Assume the optimal path passes through state S. The metric comparison is done by adding the metric of S to those of S1 and S2. On the survivor path the accumulated metric is necessarily smaller (otherwise it could not be the optimum path).
For this reason the non-surviving path can be discarded, so not all path alternatives need to be considered. Note that in principle the whole transmitted sequence must be received before the decision. The trellis has 2^L nodes (determined by the memory depth) and 2^k branches enter each node. In practice, however, storing the states for an input length of 5L is quite adequate.
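The survivor-path idea can be sketched as a minimal Viterbi decoder for the (2,1,2) code, applied here to the received sequence 10 01 10 11 00 from the earlier exercise. Hamming distance is used as the branch metric (equivalent to the log-likelihood metric for a BSC with p < 0.5), and the generators g1 = 1 + D^2, g2 = 1 + D + D^2 are an assumption, as before:

```python
# Minimal Viterbi decoder sketch for an (n,k,L) = (2,1,2) code.
# Assumed generators: g1 = (1,0,1), g2 = (1,1,1). Branch metric = Hamming
# distance, so the minimum-distance path is the ML path on a BSC, p < 0.5.

def viterbi_decode(received, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Return (decoded input bits, Hamming distance) of the survivor path
    ending in the all-zero state (trellis terminated by zero flush bits)."""
    metric = {(0, 0): 0}          # accumulated path metric per state (s1, s2)
    path = {(0, 0): []}           # survivor input sequence per state
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_path = {}, {}
        for (s1, s2), m in metric.items():
            for u in (0, 1):      # 2^k = 2 branches leave each state
                window = (u, s1, s2)
                out = [sum(b * g for b, g in zip(window, g1)) % 2,
                       sum(b * g for b, g in zip(window, g2)) % 2]
                d = m + (out[0] != r[0]) + (out[1] != r[1])
                ns = (u, s1)      # shift-register update
                if d < new_metric.get(ns, float("inf")):
                    new_metric[ns], new_path[ns] = d, path[(s1, s2)] + [u]
        metric, path = new_metric, new_path
    return path[(0, 0)], metric[(0, 0)]

# Received sequence from the exercise (3 message bits + 2 flush bits encoded):
bits, dist = viterbi_decode([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
print(bits, dist)
```

Only one path per state survives each step, so the work grows linearly with the sequence length instead of exponentially as in the exhaustive search.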
Given a received code sequence and the (n,k,L) = (2,1,2) encoder shown below, determine the Viterbi decoded output sequence!
[Trellis diagram for the decoding example: states with branch metrics; surviving and deleted branches marked]
The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4; the respective decoded message sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum-distance path. (Black circles denote the deleted branches; dashed lines indicate that a '1' was applied.)
Turbo Codes
Background
Turbo codes were proposed by Berrou and Glavieux at the 1993 International Conference on Communications (ICC). Performance within 0.5 dB of the channel capacity limit for BPSK was demonstrated.
Comparison:
Rate-1/2 codes: constraint length K = 5 turbo code vs. K = 14 convolutional code.
Theoretical Limit!
Plot is from: L. Perez, "Turbo Codes", chapter 8 of Trellis Coding, C. Schlegel, IEEE Press, 1997.
The Turbo-Principle
Turbo codes get their name because the decoder uses feedback, like a turbocharged engine.
[BER vs. Eb/N0 (dB) plot, 0.5 to 1.5 dB: BER falls from 10^-1 toward 10^-7 as the number of decoder iterations increases through 1, 2, 3, 6, 10, and 18]
Trellis-Coded Modulation:
1. Combines encoding and modulation (using Euclidean distance only).
2. Allows parallel transitions in the trellis.
3. Has significant coding gain (3 to 4 dB) without bandwidth compromise.
4. Has the same complexity (same amount of computation, same decoding time, and same amount of memory needed).
5. Has great potential for fading channels.
6. Widely used in modems.
Set Partitioning
1. Branches diverging from the same state must have the largest distance.
2. Branches merging into the same state must have the largest distance.
3. Codes should be designed to maximize the length of the shortest error-event path for fading channels (equivalent to maximizing diversity).
4. By satisfying criteria 1 and 2, the coding gain can be increased.
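The distance growth behind criteria 1 and 2 can be illustrated numerically. A sketch for a unit-energy 8-PSK constellation (an assumed example constellation, the classic one used for set partitioning): each partition step increases the minimum Euclidean distance within a subset.

```python
# Set-partitioning sketch for unit-energy 8-PSK: successive partitions
# increase the minimum Euclidean distance within each subset.
import cmath
import itertools

def min_dist(points):
    """Minimum pairwise Euclidean distance within a subset."""
    return min(abs(p - q) for p, q in itertools.combinations(points, 2))

pts = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]   # 8-PSK points
print(round(min_dist(pts), 3))        # full set:       2*sin(pi/8) ~ 0.765
print(round(min_dist(pts[::2]), 3))   # after 1 split:  QPSK subset, sqrt(2) ~ 1.414
print(round(min_dist(pts[::4]), 3))   # after 2 splits: antipodal pair, 2.0
```

Mapping coded bits to subsets with large internal distance is what lets TCM gain 3 to 4 dB without expanding bandwidth.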
Coding Gain
About 3 dB