
IMPLEMENTATION OF LOW-DENSITY PARITY CHECK DECODER

(A REPORT FOR VLSI TOOLS)

Submitted by CH.PHANIRAJA SRINIVAS(200641014) G.PAVAN KUMAR(200641028) M.Tech VLSI and Embedded Systems

International Institute of Information Technology (Deemed University) Hyderabad

ACKNOWLEDGEMENTS
This is an acknowledgement of the intensive drive and technical competence of the many individuals who contributed to the success of our project. We are grateful to Dr. M. B. Srinivas for providing all the necessary facilities and for his support during this project. We are also obliged and grateful to Mr. JVR. Ravindra for his valuable suggestions and sagacious guidance in all respects during the period of this project.

Contents

1 INTRODUCTION
2 A BASIC COMMUNICATION SYSTEM
   2.1 Basic Communication System
   2.2 Shannon's Theorem
3 LOW DENSITY PARITY CHECK CODES
   3.1 History
   3.2 Brief review of LDPC codes
   3.3 Importance of LDPC codes
   3.4 LDPC Code Construction
4 DECODER FOR LDPC CODES
   4.1 Decoding of LDPC Codes
   4.2 Belief Propagation Decoding
5 SOURCE CODE
   5.1 Verilog code for LDPC decoder
   5.2 C code for generating all valid codewords for a given H matrix
6 EXPERIMENTAL RESULTS
   6.1 Simulation
   6.2 Steps followed in the simulation process
   6.3 Synthesis
7 CONCLUSION
8 REFERENCES

List of Figures

1 Sparse Bipartite graphs
2 Simulation output
3 Critical Path
4 Technology Schematic(1)
5 Technology Schematic(2)
6 RTL Schematic(1)
7 RTL Schematic(2)
8 RTL Schematic(3)

1 INTRODUCTION

LDPC codes are one of the hottest topics in coding theory today. Originally invented in the early 1960s, they have experienced an amazing comeback in the last few years. Unlike many other classes of codes, LDPC codes are already equipped with very fast (probabilistic) encoding and decoding algorithms. In the past few years, Low-Density Parity Check (LDPC) codes have received a lot of attention for their excellent performance and for the inherent parallelism involved in decoding them. In the present project, a Low-Density Parity Check decoder is implemented in Verilog. The decoder designed here can detect and correct errors in an incoming encoded data signal of 12 bits length. Active-HDL 6.3 is used for simulation, and Mentor Graphics Leonardo Spectrum with Virtex IV technology is used for synthesis.

2 A BASIC COMMUNICATION SYSTEM

2.1 Basic Communication System

A basic communication system is composed of three parts: a transmitter, a channel, and a receiver. Transmitted information becomes altered due to noise corruption and channel distortion. To account for these errors, redundancy is intentionally introduced, and the receiver uses a decoder to make corrections. In error control coding (ECC), it is pertinent to have a high code rate while maintaining low complexity. Soft-input soft-output (SISO) decoders differ in that their inputs and outputs are probabilistic information instead of sequences of bits. SISO decoders use digital signal processing and are therefore more robust, but at the cost of higher complexity. In addition, SISO decoders make it easy to pass information in an iterative fashion. Iterative decoding is a new and powerful technique for error correction in communication systems; it leads to a significant improvement in bit error rate (BER) over conventional decoders.

Block diagram of the communication system

2.2 Shannon's Theorem

The year 1948 marks the birth of information theory. In that year, Claude E. Shannon published his epoch-making paper on the limits of reliable transmission of data over unreliable channels, and on methods of achieving these limits. Among other things, this paper formalized the concept of information and established bounds for the maximum amount of information that can be transmitted over unreliable channels. A communication channel is usually defined as a triple consisting of an input alphabet, an output alphabet, and, for each pair (i, o) of input and output elements, a transition probability p(i, o). Semantically, the transition probability is the probability that the symbol o is received given that i was transmitted over the channel. Given a communication channel, Shannon proved that there exists a number, called the capacity of the channel, such that reliable transmission is possible for rates arbitrarily close to the capacity, and reliable transmission is not possible for rates above the capacity.

The notion of capacity is defined purely in terms of information theory. As such, it does not guarantee the existence of transmission schemes that achieve the capacity. In the same paper, Shannon introduced the concept of codes as ensembles of vectors that are to be transmitted. It is clear that if the channel is such that even one input element can be received in at least two possible ways (albeit with different probabilities), then reliable communication over that channel is not possible if only single elements are sent over the channel. To achieve reliable communication, it is thus imperative to send input elements that are correlated. This leads to the concept of a code, defined as a (finite) set of vectors over the input alphabet. We assume that all the vectors have the same length, and call this length the block length of the code. If the number of vectors is K = 2^k, then every vector can be described with k bits. If the length of the vectors is n, then in n uses of the channel, k bits have been transmitted. We say then that the code has a rate of k/n bits per channel use, or k/n bpc.

Suppose now that we send a codeword and receive a vector over the output alphabet. If the channel allows for errors, then there is no general way of telling which codeword was sent with absolute certainty. However, we can find the most likely codeword, in the sense that the probability that this codeword was sent, given the observed vector, is maximized. To see that we really can find such a codeword, simply list all K codewords and calculate the conditional probability for each individual codeword; then find the codeword or codewords that yield the maximum probability and return one of them. This decoder is called the maximum likelihood decoder. It is not perfect: it takes a lot of time when the code is large, and it may err; but it is the best one can do.
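As a minimal illustration of this brute-force procedure (our own sketch, not part of the original report; the function name and codebook layout are assumptions), on a BSC with crossover probability below 1/2 the maximum likelihood codeword is simply the one at minimum Hamming distance from the received word:

```c
/* Brute-force maximum likelihood decoding: scan all K codewords of
   length n (codebook stored row-major) and return the index of the one
   at minimum Hamming distance from the received word. On a BSC with
   p < 1/2, minimum Hamming distance is equivalent to maximum likelihood. */
int ml_decode(const int *codebook, int K, int n, const int *received)
{
    int best = 0, best_dist = n + 1;
    for (int k = 0; k < K; k++) {
        int dist = 0;
        for (int j = 0; j < n; j++)
            dist += (codebook[k * n + j] != received[j]);
        if (dist < best_dist) {
            best_dist = dist;
            best = k;
        }
    }
    return best;
}
```

As the text notes, this scan over all K = 2^k codewords becomes infeasible for large codes, which is exactly what motivates structured codes with efficient decoders.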
Shannon proved the existence of codes of rates close to capacity for which the probability of error of the maximum likelihood decoder goes to zero as the block length of the code goes to infinity (in fact, the decoding error of the maximum likelihood decoder goes to zero exponentially fast with the block length). Codes that approach capacity are very good from a communication point of view, but Shannon's theorems are non-constructive and do not give a clue on how to find such codes. The design of codes with efficient encoding and decoding algorithms that approach the capacity of the channel is therefore important. Let us consider two example communication channels: 1) the binary erasure channel (BEC), and 2) the binary symmetric channel (BSC).

These channels are described in the figure. In both cases the input alphabet is binary, and the elements of the input alphabet are called bits. In the case of the binary erasure channel, the output alphabet consists of 0, 1, and an additional element denoted e and called an erasure. Each bit is either transmitted correctly (with probability 1 - p) or erased (with probability p). The capacity of this channel is 1 - p. In the case of the BSC, both the input and the output alphabet are F2. Each bit is either transmitted correctly with probability 1 - p, or flipped with probability p. This channel may seem simpler than the BEC at first sight, but in fact it is much more complicated. The complication arises because it is not clear which bits are flipped (in the case of the BEC, it is clear which bits are erased). The capacity of this channel is 1 + p log2(p) + (1 - p) log2(1 - p), i.e., 1 - h(p), where h is the binary entropy function. Maximum likelihood decoding for this channel is equivalent to finding, for a given vector of length n over F2, a codeword that has the smallest Hamming distance from the received word. It can be shown that maximum likelihood decoding for the BSC is NP-complete. In contrast, for linear codes, maximum likelihood decoding on the BEC runs in polynomial time, since it can be reduced to solving a system of linear equations.

3 LOW DENSITY PARITY CHECK CODES

3.1 History

Low-density parity-check codes were introduced by Gallager in 1962. Soon after their invention, they were largely forgotten, and they were reinvented several times over the next 30 years. LDPC codes were largely neglected by the scientific community for several decades, until the remarkable success of Turbo codes prompted their rediscovery, pioneered by MacKay, Neal, and Wiberg. Their comeback is one of the most intriguing aspects of their history, since two different communities reinvented codes similar to Gallager's LDPC codes at roughly the same time. The past few years have seen significant improvement in LDPC code construction and performance analysis.

3.2 Brief review of LDPC codes

LDPC codes are linear codes obtained from sparse bipartite graphs. Suppose that G is a graph with n left nodes (called message nodes) and r right nodes (called check nodes). The graph gives rise to a linear code of block length n and dimension at least n - r in the following way: the n coordinates of the codewords are associated with the n message nodes, and the codewords are those vectors (c1, ..., cn) such that, for every check node, the sum of the neighboring positions among the message nodes is zero. Figure 1 gives an example.

Figure 1: Sparse Bipartite graphs

A low-density parity-check code is one whose parity-check matrix is binary and sparse: most of the entries are zero and only a small fraction are ones. In its simplest form, the parity-check matrix H is constructed at random, subject to some rather weak constraints. A t-regular LDPC code is one where the column weight (number of ones) of each column is exactly t, resulting in an average row weight of nt/(n - k). One might also fix the row weight to be exactly s = nt/(n - k); an (s, t)-regular LDPC code is one where both the row and column weights are fixed. The following parity-check matrix H is an LDPC matrix with t = 2, and for any valid codeword c, Hc^T = 0.

The graph representation is analogous to a matrix representation obtained by looking at the adjacency matrix of the graph: let H be a binary r × n matrix in which the entry (i, j) is 1 if and only if the i-th check node is connected to the j-th message node in the graph. Then the LDPC code defined by the graph is the set of vectors c = (c1, ..., cn) such that Hc^T = 0. The matrix H is called a parity-check matrix for the code. Conversely, any binary r × n matrix gives rise to a bipartite graph between n message nodes and r check nodes, and the code defined as the null space of H is precisely the code associated with this graph. Therefore, any linear code has a representation as a code associated with a bipartite graph (note that this graph is not uniquely defined by the code). However, not every binary linear code has a representation by a sparse bipartite graph. If it does, then the code is called a low-density parity-check (LDPC) code. The sparsity of the graph structure is the key property that allows for the algorithmic efficiency of LDPC codes.
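The membership condition Hc^T = 0 can be checked directly. The following sketch (our own illustration, with an assumed helper name) treats H as a row-major 0/1 array:

```c
/* Return 1 iff the length-n word c satisfies all r parity checks of the
   binary matrix H (stored row-major), i.e. H * c^T = 0 over GF(2). */
int is_codeword(const int *H, int r, int n, const int *c)
{
    for (int i = 0; i < r; i++) {
        int sum = 0;
        for (int j = 0; j < n; j++)
            sum ^= H[i * n + j] & c[j];  /* XOR = addition mod 2 */
        if (sum != 0)
            return 0;  /* check node i is unsatisfied */
    }
    return 1;
}
```

For a sparse H one would store only the positions of the ones in each row, so that each check costs time proportional to its weight rather than to n; that sparsity is exactly what the decoding algorithms exploit.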

3.3 Importance of LDPC codes

Low-density parity-check (LDPC) codes were originally devised to exploit low decoding complexity by constructing sparse parity-check matrices. Though an LDPC code does not have a maximized minimum distance, due to the randomly generated sparse parity-check matrix, the typical minimum distance increases linearly as the block length increases. Moreover, the error probability decreases exponentially for a sufficiently long block length, whereas the decoding complexity is linearly proportional to the code length. Recent simulation results show that LDPC codes can achieve a performance within 0.04 dB of the Shannon limit, and the performance is close to that of Turbo codes when the block length is larger than 1000 bits. Despite these advantages, when the LDPC code was first introduced, it made little impact on the information theory community because of the storage requirements for encoding and the computational complexity of decoding. Modern VLSI technology is now advanced enough to enable parallel architectures that exploit the benefit of the inherently parallel LDPC decoding algorithms.

3.4 LDPC Code Construction

To achieve good performance, LDPC codes should have the following properties:
(a) Large code length: the performance improves as the code length increases, so the code length cannot be too small (at least 1K).
(b) Not too many small cycles: too many small cycles in the code's bipartite graph will seriously degrade the error-correcting performance.
(c) Irregular node degree distribution: it has been well demonstrated that carefully designed LDPC codes with irregular node degree distributions remarkably outperform regular ones.


4 DECODER FOR LDPC CODES

Consider an LDPC code of length 6 bits, with 3 message bits and 3 parity bits. It has eight valid codewords: 000000, 011001, 110010, 111100, 101011, 100101, 001110, 010111. This LDPC code fragment thus represents a 3-bit message encoded as 6 bits; the purpose of the redundancy is to aid in recovering from channel errors. The parity-check matrix representing this graph fragment is shown below.

In this matrix, each row represents one of the three parity-check constraints, whereas each column represents one of the six bits in the received codeword.
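Since the matrix itself appears only as a figure, the following sketch (our own, not from the report) uses an H reconstructed from the parity checks computed in the Verilog source of Section 5 (rows c1+c2+c3+c4, c3+c4+c6, c1+c4+c5, all mod 2); treat it as an assumption. Enumerating all 64 six-bit words against it recovers exactly the eight codewords listed above:

```c
/* Parity-check matrix assumed from the XOR trees in the Verilog decoder:
   row 0: c1+c2+c3+c4, row 1: c3+c4+c6, row 2: c1+c4+c5 (all mod 2). */
static const int H_EXAMPLE[3][6] = {
    {1, 1, 1, 1, 0, 0},
    {0, 0, 1, 1, 0, 1},
    {1, 0, 0, 1, 1, 0},
};

/* Count the 6-bit words w (bit j of w = coordinate c_{j+1}) that satisfy
   every row of H_EXAMPLE; for a rank-3 H this must be 2^(6-3) = 8. */
int count_example_codewords(void)
{
    int count = 0;
    for (int w = 0; w < 64; w++) {
        int ok = 1;
        for (int i = 0; i < 3 && ok; i++) {
            int sum = 0;
            for (int j = 0; j < 6; j++)
                sum ^= H_EXAMPLE[i][j] & ((w >> j) & 1);
            if (sum != 0)
                ok = 0;
        }
        count += ok;
    }
    return count;
}
```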

4.1 Decoding of LDPC Codes

Decoding an LDPC code exactly is an NP-complete problem, but belief propagation leads to a good approximate decoder. For example, suppose the valid codeword 101011 from the example above is transmitted across a binary erasure channel and received with the 1st and 4th bits erased, yielding ?01?11. We know that the transmitted message must have satisfied the code constraints, which we can represent by writing the received message on top of the factor graph, as shown below. Belief propagation is particularly simple for the binary erasure channel and consists of iterative constraint satisfaction. In this case, the first step of belief propagation is to realize that the 4th bit must be 0 to satisfy the middle constraint.

Now that we have decoded the 4th bit, we realize that the 1st bit must be a 1 to satisfy the leftmost constraint.


Thus we are able to iteratively decode the message encoded with our LDPC code. For other channel models, the messages passed between the variable nodes and check nodes are real numbers which express probabilities and likelihoods of belief.
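This erasure-filling procedure generalizes to any H: repeatedly find a check with exactly one erased neighbour and solve for that bit. A minimal sketch of such a peeling decoder (our own illustration, using -1 to mark an erased position):

```c
/* Iterative constraint satisfaction (peeling) on the binary erasure
   channel. H is r x n with 0/1 entries (row-major); c holds the received
   word with erased positions marked -1. Returns 1 if all erasures were
   resolved, 0 if the decoder stalled. */
int peel_decode(const int *H, int r, int n, int *c)
{
    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < r; i++) {
            int erased_pos = -1, erased_cnt = 0, sum = 0;
            for (int j = 0; j < n; j++) {
                if (!H[i * n + j])
                    continue;
                if (c[j] < 0) {
                    erased_pos = j;
                    erased_cnt++;
                } else {
                    sum ^= c[j];
                }
            }
            /* a check with a single unknown determines that bit */
            if (erased_cnt == 1) {
                c[erased_pos] = sum;
                progress = 1;
            }
        }
    }
    for (int j = 0; j < n; j++)
        if (c[j] < 0)
            return 0;
    return 1;
}
```

On the ?01?11 example above, the middle check first fixes the 4th bit to 0, after which the leftmost check fixes the 1st bit to 1, recovering 101011.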

We can validate this result by multiplying the corrected codeword r by the parity-check matrix H:

4.2 Belief Propagation Decoding

In this project, an inference technique called belief propagation is used. The technique allows one to take a graph describing the inter-relationships between the variables in a system and obtain approximate marginalized probabilities for each variable. Belief propagation can then be used to approximately infer the state of hidden variables from observed variables. To be more specific, we look initially at belief propagation decoding of low-density parity-check codes on a memoryless channel; this is the best-known practical decoding algorithm. In this case, the observed variables are the values received from the channel, and the hidden variables represent the true transmitted codeword.


5 SOURCE CODE

5.1 Verilog code for LDPC decoder

`ifdef _VCP
`else
  `define library(a,b)
`endif

// ---------- Design Unit Header ---------- //
`timescale 1ps / 1ps
module ldpc4 (i, o, resend);

// ------------ Port declarations ---------- //
input  [0:11] i;
wire   [0:11] i;
output [0:5]  o;
output        resend;
reg    [0:5]  o;
reg           resend;

// ----------- Signal declarations --------- //
reg r1,  r2,  r3,  r4,  r5,  r6,  r7,  r8;
reg r9,  r10, r11, r12, r13, r14, r15, r16;


reg r17, r18, r19, r20, r21, r22, r23, r24;
reg r25, r26, r27, r28, r29, r30;
reg resend1, resend2;

// -------------- Always processes --------- //
always @(i[0] or i[1] or i[2] or i[3] or i[4] or i[5])
begin
    r12 = i[0] ^ i[1];
    r13 = i[2] ^ i[3];
    r1  = r12 ^ r13;
    r14 = i[2] ^ i[3];
    r2  = r14 ^ i[5];
    r15 = i[0] ^ i[3];
    r3  = r15 ^ i[4];
end

always @(r1 or r2 or r3)
begin
    if (r1 == 1'b0 && r2 == 1'b0 && r3 == 1'b1)
    begin
        r4 = 1'b0; r5 = 1'b1; r6 = 1'b0;
    end
    else if (r1 == 1'b0 && r2 == 1'b1 && r3 == 1'b0)
    begin
        r4 = 1'b0; r5 = 1'b0; r6 = 1'b1;
    end


    else if (r1 == 1'b1 && r2 == 1'b1 && r3 == 1'b1)
    begin
        r4 = 1'b1; r5 = 1'b0; r6 = 1'b0;
    end
    else
    begin
        r4 = 1'b0; r5 = 1'b0; r6 = 1'b0;
    end
end

always @(r4 or r5 or r6)
begin
    if (r4 == 1'b0 && r5 == 1'b1 && r6 == 1'b0)
    begin
        r7 = 1'b0; r8 = 1'b1;
    end
    else if (r4 == 1'b0 && r5 == 1'b0 && r6 == 1'b1)
    begin
        r7 = 1'b1; r8 = 1'b0;
    end
    else if (r4 == 1'b1 && r5 == 1'b0 && r6 == 1'b0)
    begin
        r7 = 1'b0; r8 = 1'b0;
    end
    else if (r4 == 1'b0 && r5 == 1'b0 && r6 == 1'b0)
    begin
        r7 = 1'b1; r8 = 1'b1;
    end
end

always @(r7 or r8 or i[0] or i[1] or i[2] or i[3] or i[4] or i[5])


begin
    if (r7 == 1'b0 && r8 == 1'b0)
    begin
        r9  = ~i[3]; r10 = i[4]; r11 = i[5];
        o[0] = r9; o[1] = r10; o[2] = r11;
    end
    else if (r7 == 1'b0 && r8 == 1'b1)
    begin
        r9  = i[3]; r10 = ~i[4]; r11 = i[5];
        o[0] = r9; o[1] = r10; o[2] = r11;
    end
    else if (r7 == 1'b1 && r8 == 1'b0)
    begin
        r9  = i[3]; r10 = i[4]; r11 = ~i[5];
        o[0] = r9; o[1] = r10; o[2] = r11;
    end
    else if (r7 == 1'b1 && r8 == 1'b1)
    begin
        r9  = i[3]; r10 = i[4]; r11 = i[5];
        o[0] = r9; o[1] = r10; o[2] = r11;
    end
end

always @(r1 or r2 or r3)
begin
    if ((r1==0 && r2==1 && r3==1) || (r1==1 && r2==0 && r3==1) ||
        (r1==1 && r2==1 && r3==0) || (r1==1 && r2==0 && r3==0))
    begin


        resend1 = 1'b1;
    end
    else
    begin
        resend1 = 1'b0;
    end
end

// second module
always @(i[6] or i[7] or i[8] or i[9] or i[10] or i[11])
begin
    r27 = i[6] ^ i[7];
    r28 = i[8] ^ i[9];
    r16 = r27 ^ r28;
    r29 = i[8] ^ i[9];
    r17 = r29 ^ i[11];
    r30 = i[6] ^ i[9];
    r18 = r30 ^ i[10];
end

always @(r16 or r17 or r18)
begin
    if (r16 == 1'b0 && r17 == 1'b0 && r18 == 1'b1)
    begin
        r19 = 1'b0; r20 = 1'b1; r21 = 1'b0;
    end
    else if (r16 == 1'b0 && r17 == 1'b1 && r18 == 1'b0)
    begin
        r19 = 1'b0; r20 = 1'b0; r21 = 1'b1;
    end
    else if (r16 == 1'b1 && r17 == 1'b1 && r18 == 1'b1)
    begin
        r19 = 1'b1; r20 = 1'b0; r21 = 1'b0;
    end
    else
    begin


        r19 = 1'b0; r20 = 1'b0; r21 = 1'b0;
    end
end

always @(r19 or r20 or r21)
begin
    if (r19 == 1'b0 && r20 == 1'b1 && r21 == 1'b0)
    begin
        r22 = 1'b0; r23 = 1'b1;
    end
    else if (r19 == 1'b0 && r20 == 1'b0 && r21 == 1'b1)
    begin
        r22 = 1'b1; r23 = 1'b0;
    end
    else if (r19 == 1'b1 && r20 == 1'b0 && r21 == 1'b0)
    begin
        r22 = 1'b0; r23 = 1'b0;
    end
    else if (r19 == 1'b0 && r20 == 1'b0 && r21 == 1'b0)
    begin
        r22 = 1'b1; r23 = 1'b1;
    end
end

always @(r22 or r23 or i[6] or i[7] or i[8] or i[9] or i[10] or i[11])
begin
    if (r22 == 1'b0 && r23 == 1'b0)
    begin
        r24 = ~i[9]; r25 = i[10]; r26 = i[11];
        o[3] = r24; o[4] = r25; o[5] = r26;
    end
    else if (r22 == 1'b0 && r23 == 1'b1)
    begin
        r24 = i[9];


        r25 = ~i[10]; r26 = i[11];
        o[3] = r24; o[4] = r25; o[5] = r26;
    end
    else if (r22 == 1'b1 && r23 == 1'b0)
    begin
        r24 = i[9]; r25 = i[10]; r26 = ~i[11];
        o[3] = r24; o[4] = r25; o[5] = r26;
    end
    else if (r22 == 1'b1 && r23 == 1'b1)
    begin
        r24 = i[9]; r25 = i[10]; r26 = i[11];
        o[3] = r24; o[4] = r25; o[5] = r26;
    end
end

always @(r16 or r17 or r18 or r19 or r20 or r21)
begin
    if ((r16==0 && r17==1 && r18==1) || (r16==1 && r17==0 && r18==1) ||
        (r16==1 && r17==1 && r18==0) || (r16==1 && r17==0 && r18==0))
    begin
        resend2 = 1'b1;
    end
    else
    begin
        resend2 = 1'b0;
    end
end

always @(resend1 or resend2)
begin
    if (resend1 == 1 || resend2 == 1)
    begin
        resend = 1'b1;
    end


    else
    begin
        resend = 1'b0;
    end
end

endmodule

5.2 C code for generating all valid codewords for a given H matrix

#include <stdio.h>

int main()
{
    int p[100][100], q[100][1024], array[100][1024];
    int i, j, k, a, b, c, ans = 0, m, n, power = 1, count = 0;

    printf("order mxn: ");
    scanf("%d %d", &m, &n);
    for (i = 0; i < m; i++)
        for (j = 0; j < n; j++)
            scanf("%d", &p[i][j]);

    for (i = 0; i < n; i++)
        power = power * 2;

    for (a = 0; a < power; a++) {
        /* write a in binary (LSB first) into column a of q */
        c = a;
        for (b = 0; b < n; b++) {
            q[b][a] = c % 2;
            c = c / 2;
        }

        /* multiply H (stored in p) by the candidate vector, mod 2 */
        for (j = 0; j < m; j++) {
            array[j][a] = 0;
            for (k = 0; k < n; k++)
                array[j][a] = array[j][a] + p[j][k] * q[k][a];
            array[j][a] = array[j][a] % 2;
        }

        /* the vector is a codeword iff every parity check is zero */
        for (i = 0; i < m; i++)
            ans = ans + array[i][a];
        if (ans == 0) {
            printf("code is: ");
            for (i = 0; i < n; i++)
                printf("%d ", q[i][a]);
            printf("\n");
            count++;
        }
        ans = 0;
    }
    return 0;
}


6 EXPERIMENTAL RESULTS

6.1 Simulation

Simulation is done to check the correctness of the code. The Verilog code is designed and simulated to check for errors such as syntax errors, logical errors, etc. Once the simulation is done and the code is ready, the next step is synthesis. One thing every Verilog designer should remember is that not every code that simulates is synthesisable. Simulation in the present project is done using Active-HDL 6.3. The algorithm is first developed and then coded using this software. The algorithmic, logical, and coding errors are corrected by observing the results obtained, and finally an error-free Verilog code for the LDPC decoder is generated.

6.2 Steps followed in the simulation process

1. The first step in any coding task is the development of the algorithm to be coded.
2. Once the algorithm is ready, coding is started and the different modules are developed.
3. These modules are independently checked for errors and later assembled to form the complete code.
4. The whole code is also checked for errors, and the logic is verified by simulating the code.


Figure 2: Simulation output

6.3 Synthesis

Synthesis is the final step in the design procedure. Once the code is simulated, synthesis is done; if the simulated code is not synthesisable, corrections have to be made to make it so. Synthesis is done using Mentor Graphics Leonardo Spectrum, and the target technology is Virtex IV. In synthesis, results such as chip area and delay are observed. The chip area is determined from the number of Look-Up Tables (LUTs), and the delay figure gives the propagation delay. The numbers of input and output pins are also obtained during synthesis. The following table shows the results obtained during the synthesis of the code developed to implement the LDPC decoder.


Figure 3: Critical Path


Figure 4: Technology Schematic(1)


Figure 5: Technology Schematic(2)


Figure 6: RTL Schematic(1)

Figure 7: RTL Schematic(2)

Figure 8: RTL Schematic(3)


7 CONCLUSION

The decoder for LDPC codes is implemented with the use of the bipartite graph obtained from the given H matrix. The code is first simulated and then synthesised to obtain a detailed analysis of the designed decoder. We observed that a high-throughput LDPC decoding architecture should exploit the benefit of parallel decoding algorithms while reducing the interconnection complexity. A partially parallel architecture is a good tradeoff between throughput and hardware cost.

Future scope of study: A fully parallel LDPC decoding architecture can achieve high decoding throughput, but it suffers from large hardware complexity caused by a large set of processing units (PUs) and complex interconnections. A practical solution for area-efficient decoders is to use a partially parallel architecture in which a PU is shared among several rows or columns. In the partially parallel architecture, it is important to determine which rows or columns are processed in a given PU and in what order. The dependencies between rows and columns should be considered to minimize the overall processing time by overlapping the decoding operations.


8 REFERENCES

R. G. Gallager, "Low density parity check codes," IRE Transactions on Information Theory, vol. IT-8, Jan. 1962.

E. A. Ratzer, "Error-correction on non-standard communication channels," Christ's College, Cambridge, 2003.

A. Shokrollahi, "LDPC Codes: An Introduction," Digital Fountain, Inc., 39141 Civic Center Drive, Fremont, CA 94538, April 2, 2003.

D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electronics Letters, vol. 33, no. 6, 13 March 1997.

E. Yeo, P. Pakzad, B. Nikolic, and V. Anantharam, "VLSI Architectures for Iterative Decoders in Magnetic Recording Channels," IEEE Transactions on Magnetics, vol. 37, March 2001.

S. Palnitkar, Verilog HDL: A Guide to Digital Design and Synthesis.

