
A Study of Polar Codes: A Survey

1st Author                              2nd Author                              3rd Author

Issame El Kaime                         Abdessalam Ait Madi                     Hassane Erguig
ENSAK, Ibn Tofail University, Kenitra   ENSAK, Ibn Tofail University, Kenitra   ENSAK, Ibn Tofail University, Kenitra
issame.elkaime@uit.ac.ma                aitmadi_abdessalam@yahoo.fr             erguigh@yahoo.fr

ABSTRACT
In this paper, the specific polar codes recently adopted by the 5G (Fifth Generation) NR (New Radio) interface standard are investigated. Based on the B-DMC (Binary Discrete Memoryless Channel), the purpose of each key component of these codes and the associated operations are explained. We also present an efficient method to construct polar codes.

Keywords
Polar code, successive cancellation, channel polarization, erasure channel

• Theory of information → Error correcting codes → Polar codes

1. INTRODUCTION
The main idea of FEC (Forward Error Correction) is to transmit enough redundant data along with the useful data to allow the receiver to correct, by itself, the errors introduced by the transmission channel. In this case, no retransmission from the transmitter is required.
During the past decades, many FEC schemes, such as Turbo codes, RS (Reed-Solomon) codes [1], BCH (Bose-Chaudhuri-Hocquenghem) codes [2] and LDPC (Low-Density Parity-Check) codes [3], have been proposed in the literature to increase the reliability of the systems that use them.
Mobile devices of all shapes and sizes may communicate with each other, with fixed infrastructure and with satellites. Wireless communications are susceptible to noise, interference, poor signal strength, jamming, etc. Owing to these effects, the symbols in the received message may differ from the symbols in the transmitted message. To protect the communication from such errors, these systems use FEC.
The 3G (Third Generation) UMTS (Universal Mobile Telecommunications Service) and 4G (Fourth Generation) LTE (Long Term Evolution) cellular systems, for example, use Turbo codes [1]. Compared to 3G and 4G, new FEC schemes have been introduced for the future 5G standard: in November 2016, 3GPP agreed to adopt LDPC codes for the data channel and polar codes for the control channel [R1]. 5G is a new radio interface which holds promise in fulfilling new communication requirements that enable ubiquitous, low-latency, high-speed and high-reliability connections among mobile devices.
While turbo and LDPC codes entered the consciousness of the communications community in 1993 [1] and 1996 [3] respectively, polar codes were not proposed until much more recently, in 2008 [4]. Owing to this, turbo and LDPC codes have reached a much greater level of maturity than polar codes, as shown in [5]. In particular, turbo and LDPC codes can be found in many consumer devices, owing to their inclusion in 3G/4G and WiFi standards, respectively. By contrast, polar codes have not yet been adopted in any standards or consumer devices, so their maturity is limited to proof-of-concept demonstrators and academic publications [5].
Notably, polar codes have modest encoding and decoding complexity, O(N log N). Certainly, polar codes and their variants will find more deployment in many other applications and will be included in other new standards in the future. Nevertheless, the design of such codes for next-generation wireless communication systems is still in its infancy, and a range of open issues remain to be addressed. This is why, in this work, we investigate these codes and present a method to construct them.
This paper is organized as follows. Section 2 presents a brief overview of the coding and decoding algorithms of polar codes. Section 3 gives an example of polar codes over the erasure channel. The conclusion of this work is given in Section 4.

2. Polar Codes: Overview of Coding and Decoding Algorithms
In this section we give an overview of polar codes and explain a coding method and a decoding algorithm for them.

2.1. Polar Codes
In information theory, a polar code is a linear block error-correcting code. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to have either high reliability or low reliability (in other words, they polarize), and the data bits are allocated to the most reliable channels.
The polar code is a new FEC scheme invented by Arikan [4], based on a phenomenon called channel polarization. Polar codes are proved to achieve the symmetric capacity of any B-DMC using low-complexity encoders and decoders, and their block error probability is shown to decrease exponentially in the square root of the block length [R2]. In fact, two basic channel transformations lie at the heart of channel polarization. The recursive application of these transformations results in channel polarization, which refers to the fact that the channels synthesized by these transformations become, in the limit, either almost reliable (perfect and error-free) or unreliable (completely noisy); channel polarization refers to these two extreme situations. This is shown by analyzing channel reliability parameters such as the symmetric capacity and the Bhattacharyya parameter, which stand respectively as measures of the communication rate and of the reliability of the channel. These two parameters are used jointly to prove that channel polarization occurs.
The operation of channel polarization consists of two phases: a channel combining phase and a channel splitting phase. More details about this operation are investigated in the literature [4, 6, R2].
The polarization effect brought by polar codes allows dividing the N-bit input vector u between reliable and unreliable bit-channels. The K information bits are assigned to the most reliable bit-channels of u, while the remaining N−K bits, called frozen bits, are set to a predefined value (usually 0) and assigned to the most unreliable bit-channels. The codeword x is transmitted through the channel, and the decoder receives the output sequence y = (y0, y1, ..., yN−1), which is a noisy version of x = (x0, x1, ..., xN−1).
Polar coding is characterized by the transition probabilities between the source X and the destination Y, for which we write W : X → Y to denote a generic B-DMC. X and Y are respectively the input and output alphabets, W(y|x) is the transition probability with x ∈ X and y ∈ Y, and X = {0, 1}.
We denote by W^N the virtual channel corresponding to N independent uses of a given B-DMC W, thus W^N : X^N → Y^N with
W^N(y_1^N | x_1^N) = ∏_{i=1}^{N} W(y_i | x_i),
where y_1^N = (y1, y2, ..., yN), x_1^N = (x1, x2, ..., xN) and W(y_i | x_i) is the transition probability corresponding to each use of the given B-DMC.
The special characteristic of polar coding is the possibility to achieve the symmetric capacity I(W) of the channel, defined by
I(W) ≜ Σ_{y ∈ Y} Σ_{x ∈ X} (1/2) W(y|x) log2 [ W(y|x) / ((1/2) W(y|0) + (1/2) W(y|1)) ].
I(W) is a measure of the rate of a channel; it is well known that reliable communication is possible over a symmetric B-DMC at any rate less than I(W).
In the following two sub-sections we present the process and the structure of the proposed encoder and decoder for polar codes.
2.2. Coding Algorithm
A polar code P(N, K) is a linear block code of length N = 2^n and rate K/N, and it can be expressed as the concatenation of two polar codes of length N/2. This is due to the fact that the encoding process is represented by the modulo-2 matrix multiplication
x = u G_N,
where u = (u0, u1, ..., uN−1) is the input vector, x = (x0, x1, ..., xN−1) is the codeword and G_N denotes the generator matrix of size N, defined by
G_N = B_N F^⊗n,
where F^⊗n denotes the nth tensor (Kronecker) power of F [3], [4], B_N denotes the permutation matrix known as bit reversal, and F is the polarizing kernel matrix
F = | 1 0 |
    | 1 1 |
Channel polarization has two parts: first we combine channels, then we split them. Channel combining is a recursive method that builds a channel W_N out of N independent copies of W. For example, W2 is the result of combining two copies of the channel W, W2 : X^2 → Y^2, with transition probability
W2(y1 y2 | u1 u2) = W(y1 | u1 ⊕ u2) W(y2 | u2),
as shown in figure 1. The mapping u_1^N → x_1^N can be described as
x_1^N = u_1^N G_N,
where u_1^N is the input of the virtual channel W_N and x_1^N is the input of the raw channel W^N.

Figure 1. Channel combining: the channel W2 is built from two independent copies of W.
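To make the tensor-power construction of the generator matrix concrete, the short C sketch below (our own illustration; the function name kron_power_F and the flat row-major layout are assumptions, not code from this paper) builds F^⊗n for N = 2^n. The bit-reversal permutation B_N is omitted for simplicity.

#include <stdlib.h>

/* Builds the n-th Kronecker power of the kernel F = [[1,0],[1,1]] as a flat N x N
   row-major array (N = 2^n), so that a codeword can be computed as x = u F^(x)n mod 2.
   The bit-reversal permutation B_N is not applied here. The caller frees the result. */
int *kron_power_F(int n)
{
    int N = 1 << n;
    int *G = calloc((size_t)N * N, sizeof(int));
    int m, i, j;
    G[0] = 1;                                    /* F^(x)0 = [1]                      */
    for (m = 1; m <= N / 2; m <<= 1) {
        /* expand the current m x m block A (top-left corner) into [A 0; A A]         */
        for (i = 0; i < m; i++)
            for (j = 0; j < m; j++) {
                int a = G[i * N + j];
                G[(i + m) * N + j]       = a;    /* lower-left copy                   */
                G[(i + m) * N + (j + m)] = a;    /* lower-right copy                  */
            }
    }
    return G;
}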

We repeat the same operation with two copies of W2 to construct W4 (figure 2): W4 : X^4 → Y^4 with transition probability
W4(y_1^4 | u_1^4) = W2(y1 y2 | u1 ⊕ u2, u3 ⊕ u4) W2(y3 y4 | u2, u4).

Figure 2. Channel combining: the channel W4 is built from two independent copies of W2.

The input of the channel W_N can be written as
u = (u_A, u_A^c),
where u_A and u_A^c denote respectively the sets of information and frozen bits. Consequently, the codeword can be expressed as
x_1^N = u_A G_N(A) ⊕ u_A^c G_N(A^c),
where G_N(A) denotes the sub-matrix of G_N formed by the rows with indices in A. A is an arbitrary subset of {1, ..., N} of cardinality |A| = K; this set gives the positions of the K information bits, and its complement A^c gives the positions of the frozen bits.
If we fix the subset A and the frozen bits u_A^c, and leave u_A as a free variable, the mapping from source blocks u_A to codeword blocks x_1^N becomes very easy. For example, the mapping u_1^4 → x_1^4 at the input of W4 can be described as
x_1^4 = u_1^4 G_4,
where G_4 is the generator matrix of size 4:
G_4 = B_4 F^⊗2 = | 1 0 0 0 |
                 | 1 0 1 0 |
                 | 1 1 0 0 |
                 | 1 1 1 1 |
The coding process is a function that we present below. Our first implementation of the encoder (function coding) is straightforward, but somewhat wasteful in terms of space; it takes as parameters an array holding the input vector u and the code length N:
int * coding(int *u, int N)
In this example we consider a polar code characterized by N = 4, K = 2, A = {1, 3} and u_A^c = (0, 0). The codeword is then given by
x_1^4 = (u1, u3) G_4(A) ⊕ (0, 0) G_4(A^c)
      = (u1, u3) | 1 0 0 0 | ⊕ (0, 0) | 1 0 1 0 |
                 | 1 1 0 0 |          | 1 1 1 1 |
      = (u1 ⊕ u3, u3, 0, 0).
For a source block (u1, u3) = (1, 0), the codeword is x_1^4 = (1, 0, 0, 0).
More generally, a polar code is characterized by C(K, N), where N is the code length and K the number of information bits. The choice of the information bit positions is very important: it clearly influences the performance of the code. Hence, these positions should be chosen carefully, depending on the Bhattacharyya parameter; the smaller the value of this parameter, the more reliable the channel is.
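As a quick check of the N = 4 example above, the following self-contained C snippet (our own illustration, not the authors' implementation) multiplies u = (1, 0, 0, 0), i.e. (u1, u3) = (1, 0) with the frozen bits set to zero, by the matrix G_4 given in the text and prints the expected codeword (1, 0, 0, 0).

#include <stdio.h>

int main(void)
{
    const int G4[4][4] = { {1, 0, 0, 0},         /* row 1 */
                           {1, 0, 1, 0},         /* row 2 */
                           {1, 1, 0, 0},         /* row 3 */
                           {1, 1, 1, 1} };       /* row 4 */
    int u[4] = {1, 0, 0, 0};                     /* u1 = 1, u3 = 0 (A = {1,3}); u2 = u4 = 0 frozen */
    int x[4] = {0, 0, 0, 0};
    int i, j;

    for (j = 0; j < 4; j++)                      /* x = u G4 over GF(2)               */
        for (i = 0; i < 4; i++)
            x[j] ^= u[i] & G4[i][j];

    printf("x = (%d, %d, %d, %d)\n", x[0], x[1], x[2], x[3]);   /* prints (1, 0, 0, 0) */
    return 0;
}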
The first step of channel polarization is channel combining; the next and final step is channel splitting. Channel splitting consists of splitting the combined channel W_N to construct N channels W_N^(i) : X → Y^N × X^(i−1), defined by the transition probability
W_N^(i)(y_1^N, u_1^(i−1) | u_i) = Σ_{u_(i+1)^N ∈ X^(N−i)} (1 / 2^(N−1)) W_N(y_1^N | u_1^N),
where (y_1^N, u_1^(i−1)) represents the output of W_N^(i) and u_i its input.
We use channel splitting to construct polar codes that achieve the channel capacity, based on the idea that we only send data through those channels W_N^(i) for which the Bhattacharyya parameter Z(W_N^(i)) is near 0, or equivalently for which the symmetric capacity I(W_N^(i)) is near 1. The parameter Z(W_N^(i)) is given by
Z(W_N^(i)) = Σ_{y_1^N ∈ Y^N} Σ_{u_1^(i−1) ∈ X^(i−1)} sqrt( W_N^(i)(y_1^N, u_1^(i−1) | 0) W_N^(i)(y_1^N, u_1^(i−1) | 1) ).

To construct a polar code which achieves the symmetric capacity of a given B-DMC, each position i ∈ A (the positions of the information bits) is chosen so that Z(W_N^(i)) is among the K smallest values in the set {Z(W_N^(j)) : j = 1, ..., N}. The N−K remaining positions are used for the frozen bits; the values placed in these positions are unspecified and not important.

Figure 3. Recursive channel splitting: the channel W is split into W0 and W1, which are in turn split into W00, W01, W10 and W11.
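The selection of the information set can be illustrated by the following small C sketch (our own illustration, not a proposed algorithm from this paper): given the Bhattacharyya parameters of the N polarized channels, it returns the K positions with the smallest values.

/* z[0..N-1] holds Z(W_N^(i)) for i = 1..N; A[0..K-1] receives the 1-based positions
   of the K most reliable channels. A simple repeated minimum search is enough here. */
void choose_information_set(const double *z, int N, int K, int *A)
{
    double zc[64];                               /* working copy; assumes N <= 64     */
    int i, k;
    for (i = 0; i < N; i++) zc[i] = z[i];
    for (k = 0; k < K; k++) {
        int best = 0;
        for (i = 1; i < N; i++)
            if (zc[i] < zc[best]) best = i;
        A[k] = best + 1;                         /* most reliable remaining position  */
        zc[best] = 1e9;                          /* exclude it from the next pass     */
    }
}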


2.3. Decoding Algorithm


We select the information set A by computing, for every position i ∈ {1, ..., N}, the channel parameter Z(W_N^(i)), which is a measure of how error-prone the channel is [6]:
Z(W_N^(i)) = Σ_{y_1^N} Σ_{u_1^(i−1)} sqrt( W_N^(i)(y_1^N, u_1^(i−1) | 0) W_N^(i)(y_1^N, u_1^(i−1) | 1) ).
These error-prone parameters are used to measure reliability: we compute Z(W_N^(i)) for every channel i and choose the K positions with the smallest values to construct A. The values assigned to the N−K frozen positions are unspecified and not important.
Successive cancellation (SC) [4] is the main decoding method for polar codes. In this method each bit u_i is estimated recursively from the received output y and the previously estimated bits. We consider a polar code with parameters (N, K, A, u_A^c), where u_A is the source vector and u_A^c is the frozen vector. All these bits are transmitted through N uses of the channel W. The decoder observes (y, u_A^c); at the receiver there are N decision elements, and the decoder generates an estimate û_A of u_A. The decoder knows the frozen bits exactly, so û_A^c = u_A^c; errors can only occur while generating û_A.
We use the rule below to estimate the value of û_i [5]:
û_i = u_i if i ∈ A^c, and û_i = h_i(y_1^N, û_1^(i−1)) otherwise,
where the decision function h_i is based on the likelihood ratio
L_N^(i) = W_N^(i)(y_1^N, û_1^(i−1) | 0) / W_N^(i)(y_1^N, û_1^(i−1) | 1):
û_i = 0 if L_N^(i) ≥ 1, and û_i = 1 otherwise.
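The decision rule above can be sketched in C as follows (an illustration under our own assumptions, not the authors' implementation): the likelihood ratio of each bit is supplied by a caller-provided function, for instance one similar to cal_lr given in Section 3, and the threshold rule is applied to the non-frozen positions.

/* lr_fn returns the likelihood ratio of bit i given y and the previous decisions.    */
typedef double (*lr_fn)(int i, const double *y, const int *u_hat);

void sc_decode(const double *y, const int *is_frozen, const int *frozen,
               int N, lr_fn lr, int *u_hat)
{
    int i;
    for (i = 0; i < N; i++) {
        if (is_frozen[i]) {
            u_hat[i] = frozen[i];                /* uhat_i = u_i for i in A^c          */
        } else {
            double L = lr(i, y, u_hat);          /* LR of bit i from y and uhat so far */
            u_hat[i] = (L >= 1.0) ? 0 : 1;       /* uhat_i = 0 if LR >= 1, else 1      */
        }
    }
}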
Successive cancellation decoding. Our first implementation of the decoder (function decoding) is straightforward, but somewhat wasteful in terms of space. It takes as parameters the received vector Y, an array pos_a giving the positions of the information bits, the number of information bits K and the code length N:
int * decoding(int *Y, int *pos_a, int K, int N)
Disadvantages of successive cancellation. The SC decoding algorithm is optimal for infinite code lengths, but its error-correction performance degrades quickly at moderate and short code lengths [4]. In its original formulation, it also suffers from long decoding latency. In this decoding scenario each decision reuses the previously estimated values of û; if one decision element makes an error, that error propagates to all subsequent estimations. To solve this problem, other methods based on successive cancellation have been developed.
Successive cancellation list decoding. If we are not sure about an estimation, we keep both values of û_i; at the end we choose the candidate vector that is most trusted, by checking the CRC [7] (figure 4).

Figure 4. Successive cancellation list decoding: when the decision on bit u_i is uncertain, both values 0 and 1 are kept, the bits after position i are decoded for each candidate, and the CRC check selects the most trusted vector.

The performance of the successive cancellation list decoder is significantly better than that of plain successive cancellation, but it uses more computing resources. Another method is based on sorting the likelihood ratios (LR) in increasing order: the bit whose decision has the smallest LR reliability is inverted and the computation is repeated. The flow chart in figure 5 presents this scenario.

Figure 5. Flip decoding based on the likelihood ratios: after a failed CRC check, the LR values of the decisions are sorted, the least reliable decision i_m is inverted, and decoding is repeated until the CRC check succeeds or the maximum number of attempts m = Max is reached.
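A minimal sketch of the ranking step of this flip procedure is given below (our own illustration; the reliability metric |LR − 1| and the fixed buffer size are assumptions): it orders the information positions from least to most reliable, so that the m-th decoding attempt can invert the m-th least reliable decision before re-running successive cancellation and checking the CRC.

#include <stdlib.h>

static const double *g_rel;                      /* shared with the qsort comparator  */

static int cmp_rel(const void *a, const void *b)
{
    double ra = g_rel[*(const int *)a];
    double rb = g_rel[*(const int *)b];
    return (ra > rb) - (ra < rb);
}

/* lr[] holds the likelihood ratio of every decision, info_pos[] the K information
   positions; order[] receives indices into info_pos[], order[0] being the first
   candidate bit to flip. */
void rank_flip_candidates(const double *lr, const int *info_pos, int K, int *order)
{
    static double rel[64];                       /* assumes K <= 64 for this sketch   */
    int k;
    for (k = 0; k < K; k++) {
        double d = lr[info_pos[k]] - 1.0;        /* LR close to 1: unreliable decision */
        rel[k] = d < 0 ? -d : d;
        order[k] = k;
    }
    g_rel = rel;
    qsort(order, (size_t)K, sizeof(int), cmp_rel);
}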

3. Erasure Channel
A binary erasure channel (BEC) is a common communications channel model used in coding theory and information theory. In this model (figure 6), a transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly or receives a message that the bit was not received ("erased").

Figure 6. The binary erasure channel: an input bit x is received correctly with probability 1 − Pe and erased (received as "?") with probability Pe.

Example. Consider the code with N = 8, K = 4 and erasure probability ε = 0.2. Its error-prone (Bhattacharyya) parameters are:
Z(W_8^(8)) = 0.000003
Z(W_8^(7)) = 0.003197
Z(W_8^(6)) = 0.006147
Z(W_8^(5)) = 0.150653
Z(W_8^(4)) = 0.016796
Z(W_8^(3)) = 0.242404
Z(W_8^(2)) = 0.348572
Z(W_8^(1)) = 0.832228
We choose the K positions that have the smallest values of the error-prone parameter and place the information bits there, so the positions of the information bits are {8, 7, 6, 4}. For the source block u_A = (1, 1, 1, 1) and u_A^c = (0, 0, 0, 0), the input vector is u = (0, 0, 0, 1, 0, 1, 1, 1), as shown in figure 7.

Figure 7. Choice of the information bit positions for K = 4, N = 8, ε = 0.2: the resulting input vector is p1 = 0, p2 = 0, p3 = 0, p4 = 1, p5 = 0, p6 = 1, p7 = 1, p8 = 1.

These values are obtained as follows. If W is a BEC with erasure probability ε, then each channel created by channel polarization is also a BEC, with an erasure probability that can be computed by the recursion [6]
ε_N^(2i−1) = 2 ε_(N/2)^(i) − ( ε_(N/2)^(i) )^2,
ε_N^(2i) = ( ε_(N/2)^(i) )^2,
starting from ε_1^(1) = ε; for a BEC, Z(W) = ε.
Our implementation of this computation of the error-prone parameter, and of the encoder, is presented below.
Algorithm: computation of the error-prone parameter

float eps(int X, int Z, float e)
{
    /* eps(N, i, e) returns the erasure probability (Bhattacharyya parameter) of the
       i-th channel synthesized from N = 2^n copies of a BEC with erasure probability
       e, using the recursion e- = 2e - e^2 (odd index) and e+ = e^2 (even index). */
    if (X == 1 && Z == 1)
    {
        return e;                                /* base case: Z(W) = e for a BEC      */
    }
    else if (X % 2 == 0 && Z % 2 != 0)           /* odd index: degraded channel        */
    {
        float p = eps(X / 2, (Z + 1) / 2, e);
        return 2 * p - p * p;
    }
    else                                         /* even index: upgraded channel       */
    {
        float p = eps(X / 2, Z / 2, e);
        return p * p;
    }
}

Algorithm: coding

int * coding(int *u, int N)
{
    /* Recursive polar encoder: overwrites u[1..N] (1-based indexing, as in the rest
       of this section) with the codeword x = u G_N and returns u. A local buffer is
       used for one polarization stage; it assumes N <= 256. */
    int e;
    int t[257];
    if (N == 1)
    {
        return u;                                /* a single bit is its own codeword   */
    }
    for (e = 1; e <= N / 2; e++)
    {
        t[e]         = u[2 * e - 1] ^ u[2 * e];  /* upper branch: u_odd XOR u_even     */
        t[e + N / 2] = u[2 * e];                 /* lower branch: u_even               */
    }
    for (e = 1; e <= N; e++)
    {
        u[e] = t[e];
    }
    coding(u, N / 2);                            /* encode the first half recursively  */
    coding(u + N / 2, N / 2);                    /* encode the second half recursively */
    return u;
}
The result of the coding step, within the overall chain of this example, is presented in figure 8.

Figure 8. Overall chain of the example: choice of the information bits (K = 4, N = 8, ε = 0.2), coding, transmission and decoding with 8 decision elements (DEs).

After transmission we receive the vector y presented in figure 9.

Figure 9. Transmission of the codeword x through the channel W_8 (eight uses of W): each coded bit x_i is mapped to a received symbol y_i.

The vector û recovered by the decision elements is shown in figure 10.

Figure 10. Output of the decoder: from y the decision elements recover û = (0, 0, 0, 1, 0, 1, 1, 1), which matches the transmitted input vector u.
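The following self-contained C sketch (our own code, not taken from the paper) reproduces the numbers of this example: it applies the BEC recursion above for ε = 0.2 and N = 8 and prints the eight error-prone parameters, from which the information set {8, 7, 6, 4} is obtained.

#include <stdio.h>

int main(void)
{
    double z[8];
    int N = 8, len, i;
    z[0] = 0.2;                                  /* Z(W) = eps for the BEC            */
    for (len = 1; len < N; len *= 2)             /* one polarization level per pass   */
        for (i = len - 1; i >= 0; i--) {
            double e = z[i];
            z[2 * i]     = 2 * e - e * e;        /* odd-indexed (degraded) channel    */
            z[2 * i + 1] = e * e;                /* even-indexed (upgraded) channel   */
        }
    for (i = 0; i < N; i++)
        printf("Z(W8^(%d)) = %f\n", i + 1, z[i]);   /* 0.832228 down to 0.000003      */
    /* The K = 4 smallest values are at positions 8, 7, 6 and 4: the information set. */
    return 0;
}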
We present below the C code used to compute the likelihood ratio needed by the decision elements.

#include <math.h>

extern float lr[17][17];   /* likelihood ratio table, initialized to -9999 (illustrative size for N = 16) */
extern int   u[17];        /* previously decided bits                                                     */
extern float l;            /* channel likelihood ratio of the corresponding received symbol               */

float cal_lr(int i, int M)
{
    /* Returns the likelihood ratio of bit i for a code of length M, computed
       recursively from the likelihood ratios of the two constituent codes of
       length M/2. The globals above are assumed to be set up by the caller. */
    if (i == 1 && M == 1 && lr[1][1] == -9999)
    {
        lr[1][1] = log(l) / log(10);
        return l;                                /* base case: channel LR of y        */
    }
    else if (lr[i][M] != -9999)                  /* value already computed: reuse it  */
    {
        return lr[i][M];
    }
    else if (i % 2 == 0)
    {
        /* even index: L = La^(1 - 2*uhat) * Lb, with uhat the previously decided bit
           (taken here from the global array u[]) */
        return pow(cal_lr(i / 2, M / 2), 1 - 2 * u[2 * i - 1]) * cal_lr(i / 2, M / 2);
    }
    else
    {
        /* odd index: L = (La * Lb + 1) / (La + Lb) */
        float La = cal_lr((i + 1) / 2, M / 2);
        float Lb = cal_lr((i + 1) / 2, M / 2);
        return (La * Lb + 1) / (La + Lb);
    }
}

4. Conclusion
In this work, we have presented an efficient method for constructing polar codes, and we have explained the purpose of each key component based on the B-DMC. Polar codes and their variants will find more deployment in many other applications and will be included in other new standards in the future. For this reason, in future work we will analyze the performance of these codes and implement them in hardware to meet the requirements of 5G.

5. References
[1] O. Aitsab and R. Pyndiah, “Performance of Reed-Solomon
block turbo code,” in Proceedings of GLOBECOM’96. 1996
IEEE Global Telecommunications Conference, London, UK,
1996, vol. 1, pp. 121–125.
[2] J. Massey, “Step-by-step decoding of the Bose-Chaudhuri-
Hocquenghem codes,” IEEE Trans. Inf. Theory, vol. 11, no.
4, pp. 580–585, Oct. 1965.
[3] R. Gallager, “Low-density parity-check codes,” IEEE Trans.
Inf. Theory, vol. 8, no. 1, pp. 21–28, Jan. 1962.
[4] E. Arikan, “Channel polarization: A method for constructing
capacity-achieving codes for symmetric binary-input
memoryless channels,” ArXiv08073917 Cs Math, Jul. 2008.
[5] R. G. Maunder, “The 5G channel code contenders,”
ACCELERCOMM White Pap., pp. 1–13, 2016.
[6] E. Arikan, “Channel polarization: A method for constructing capacity-achieving codes,” in Proc. IEEE Int. Symp. Information Theory (ISIT), 2008.
[7] I. Tal and A. Vardy, “List decoding of polar codes,” in 2011 IEEE International Symposium on Information Theory Proceedings, St. Petersburg, Russia, 2011, pp. 1–5.
[R1] “3GPP RAN1 meeting #87 final report,” 3GPP, retrieved 31 August 2017.
[R2] M. Alsan, “Channel Polarization and Polar Codes,” https://infoscience.epfl.ch/record/176515/files/main.pdf
