
2000 Conference on Information Sciences and Systems, Princeton University, March 15-17, 2000

Thoughts on Expander Codes: Codes via Irregular Graphs


S. Kim and S. B. Wicker1
School of Electrical Engineering
Cornell University
Ithaca, NY 14853
Abstract: Randomized constructions are presented for a family of linear-time encodable and decodable error-correcting codes using irregular expander graphs. These codes can be encoded in constant time and decoded in at most logarithmic time if a linear number of processors is used.

I. Introduction
In this paper we construct a family of linear-time encodable and decodable error-correcting codes. These codes
can also be encoded by circuits of linear-size and constant depth and decoded by circuits of linear-size and at
most logarithmic depth. The size of a circuit is defined
as the number of vertices, while the depth of a circuit is
defined as the maximum length of a directed path in the
circuit. The codes are constructed through the cascading
of multiple copies of the error reducing codes developed
by Spielman [3]. The expansion properties of graphs are
used in calculating the number of errors that can be corrected by our codes. Unlike Spielman, who used regular
expanders in his construction, we consider the use of irregular expanders. This consideration is motivated by a
recent indication that irregular graphs give better decoding performance than regular graphs [2].
The cascading method that we use in our construction was originally developed by Luby et al. for the construction of erasure codes [1]. In the construction considered here, encoders for the error reducing codes are
cascaded from left (outermost) to right (innermost), with
the cascade terminated on the right by an encoder for an
error-correcting code. Decoding begins with the innermost code, the error-correcting code, followed by decoding for the successive error reducing codes.
In the next section, we construct error reducing codes
and their associated decoding algorithms. Two bounds
are presented for the number of errors that can be reduced
for codes that are derived from irregular expanders. In
section 3, we cascade the error reducing codes constructed in section 2 to construct a new code that is an error-correcting code. Section 4 gives a brief conclusion of our results.
II. Error Reducing Codes
The error reducing codes considered in this paper are
defined using bipartite graphs. Figure 1 shows a simple
bipartite graph that represents a (7, 4) Hamming code.
1 This work was supported by NSF Grant NCR-9725251.

Figure 1: Bipartite graph representing a (7, 4) Hamming code
The check bits on the right side of the graph are defined as the XOR of the neighboring message bits (i.e., those connected to the check bit by an edge). To avoid needlessly cumbersome locution, we will refer to message nodes as left nodes and check nodes as right nodes. We will further use x and c to represent left and right nodes, respectively.
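As a concrete illustration, the check-bit rule can be written out directly. The edge set below is one standard choice of (7, 4) Hamming parity equations; the exact edges of the paper's Figure 1 are assumed, not quoted. Each check node stores the XOR of its message-bit neighbors.

```python
# One standard bipartite-graph description of a (7, 4) Hamming code.
# check_nbrs[j] lists the message-bit neighbors of check node j; this
# edge set is an assumption, since Figure 1's drawing is not reproduced.
check_nbrs = [
    [0, 1, 3],  # c0 = x0 ^ x1 ^ x3
    [0, 2, 3],  # c1 = x0 ^ x2 ^ x3
    [1, 2, 3],  # c2 = x1 ^ x2 ^ x3
]

def encode(message):
    """Append one check bit per right node: the XOR of its left neighbors."""
    return list(message) + [sum(message[i] for i in nbrs) % 2
                            for nbrs in check_nbrs]

print(encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 0, 1, 0]
```

Note that message bit x3 has degree 3 while the others have degree 2, so the left degrees are unequal: exactly the irregularity discussed in the text.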
A regular bipartite graph is a bipartite graph in which
the degrees of all left nodes are equal, as are the degrees
of all right nodes. An irregular bipartite graph is a bipartite graph that, obviously, does not meet these conditions.
For example, the graph in Figure 1 is irregular, since the
left nodes do not all have the same degree. In this paper
we consider error reducing codes derived from irregular
bipartite graphs. We will restrict our attention to irregular graphs in which the average right node degree be
independent of the code length, insuring that our error
reducing code is linear-time encodable.
Let us now define an error reducing code. Roughly
speaking, if the number of message and check bits that
are corrupted during transmission is not too great, then
an error reducing code is able to correct some fraction of
the corrupted message bits. The check bits are left in the
condition in which they were received. The number of
corrupted message bits is thus reduced, hence the name
of this class of codes.
Definition 1 A code R of rn message bits and (1 − r)n check bits is an error reducing code of rate r, error reduction ε, and reducible distance δ if there exists an algorithm that, given an input word that differs from a codeword w ∈ R in v message bits and c check bits, v, c ≤ δn, outputs a word that differs from w in at most εc message bits.
We denote the code R defined by a bipartite graph B
as R(B). To present the algorithms for decoding error
reducing codes, it will be convenient to define a check
cj to be satisfied if the XOR of the bits of the adjacent
message nodes is equal to the bit of the check node cj . If
this is not the case, the constraint is said to be unsatisfied.
We flip a variable by complementing its value.
Two simple decoding algorithms are associated with
error reducing codes: the Simple Sequential Error Reducing Algorithm and the Simple Parallel Error Reducing
Algorithm.
Simple Sequential Error Reducing Algorithm:
If there is a message bit that has more unsatisfied
than satisfied neighbors, then flip the value of that
message bit.
Repeat until no such message bit remains.
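A minimal sketch of the sequential algorithm, run on the hypothetical (7, 4) Hamming graph from the earlier example. The tie-break (flip the bit with the largest unsatisfied majority) is our own choice; the algorithm as stated may flip any qualifying bit.

```python
check_nbrs = [[0, 1, 3], [0, 2, 3], [1, 2, 3]]   # right-node neighborhoods
var_nbrs = [[0, 1], [0, 2], [1, 2], [0, 1, 2]]   # left-node neighborhoods

def unsatisfied(word, n_msg):
    """Checks whose stored bit differs from the XOR of their neighbors."""
    return {j for j, nbrs in enumerate(check_nbrs)
            if sum(word[i] for i in nbrs) % 2 != word[n_msg + j]}

def sequential_reduce(word):
    """Repeatedly flip a message bit with more unsatisfied than satisfied
    neighbors, until no such bit remains."""
    word, n_msg = list(word), len(var_nbrs)
    while True:
        u = unsatisfied(word, n_msg)
        # margin > 0 means a strict majority of neighbors is unsatisfied
        margins = [2 * sum(j in u for j in nbrs) - len(nbrs)
                   for nbrs in var_nbrs]
        best = max(range(n_msg), key=lambda i: margins[i])
        if margins[best] <= 0:
            return word
        word[best] ^= 1

received = [1, 0, 1, 0, 0, 1, 0]     # codeword with message bit 3 flipped
print(sequential_reduce(received))   # [1, 0, 1, 1, 0, 1, 0]
```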
Simple Parallel Error Reducing Algorithm:
If there are message bits that have more unsatisfied
than satisfied neighbors, then flip the values of those
message bits.
We cannot append the step Repeat until no such message bit remains to the Simple Parallel Error Reducing
Algorithm, for this may create a condition in which the
algorithm does not halt. We will later state explicit conditions under which the algorithm is guaranteed to halt
in a finite amount of time.
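A sketch of the capped parallel variant, on the same hypothetical graph as above. The round limit stands in for the explicit halting conditions promised in the text: on graphs too small to expand well, the simultaneous flips can oscillate forever, and the cap forces termination.

```python
check_nbrs = [[0, 1, 3], [0, 2, 3], [1, 2, 3]]
var_nbrs = [[0, 1], [0, 2], [1, 2], [0, 1, 2]]

def parallel_reduce(word, rounds):
    """Each round simultaneously flips every message bit with more
    unsatisfied than satisfied neighbors; stops early if none qualifies."""
    word, n_msg = list(word), len(var_nbrs)
    for _ in range(rounds):
        u = {j for j, nbrs in enumerate(check_nbrs)
             if sum(word[i] for i in nbrs) % 2 != word[n_msg + j]}
        flips = [i for i, nbrs in enumerate(var_nbrs)
                 if 2 * sum(j in u for j in nbrs) > len(nbrs)]
        if not flips:
            break
        for i in flips:  # all flips of a round are applied at once
            word[i] ^= 1
    return word
```

An uncorrupted codeword is returned unchanged after a single quiet round. Theorem 7 below bounds the useful number of rounds at log_{10/9} n for graphs with the required expansion.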
To calculate error reduction and reducible distance
bounds, we use the expansion properties of irregular bipartite graphs.
Definition 2 A bipartite graph is an (α, δ) expander if any subset S consisting of at most a fraction α of the left nodes has at least δ|E(S)| right node neighbors, where E(S) is the set of edges attached to the nodes in S.

We will sometimes refer to an (α, δ) expander of rn left nodes and (1 − r)n right nodes as an (rn, (1 − r)n, α, δ) expander. We now state the two theorems of this section. The theorems build upon the good expansion properties of a randomly chosen graph, as proven in the lemmas that precede each theorem. The proofs are similar to, but slightly more general than, those given in [3].

Lemma 3 Let B be a bipartite graph with n left nodes and βn right nodes for some β > 0, whose minimum left node degree is at least 5. Then with probability 1 − O(1/n), B is an (α, δ) expander for some α > 0 and ε > 0 with δ = 3/4 + ε.

Proof: Let Es be the event that some set of s left nodes has at most δas neighbors, where a is the average degree of these s left nodes. Then

    Pr(Es) ≤ C(n, s) C(βn, δas) (δas/βn)^(as)
          ≤ (ne/s)^s (βne/δas)^(δas) (δas/βn)^(as)      (a)
          = [e^(1+δa) (δa/β)^((1−δ)a) (s/n)^((1−δ)a − 1)]^s
          = [c (s/n)^((1−δ)a − 1)]^s,

where C(n, k) denotes the binomial coefficient, step (a) follows from the inequality C(n, k) ≤ (ne/k)^k, and c is a constant that depends on a, β, and δ. Choose ε so that a/4 − εa ≥ 5/4, which is possible since a ≥ 5; then (1 − δ)a − 1 ≥ 1/4. For α small enough that cα^(1/4) < 1,

    Σ_{s=1}^{αn} Pr(Es) ≤ Σ_{s=1}^{αn} [c (s/n)^(1/4)]^s ≤ O(1/n),

which finishes the proof of the lemma.

Theorem 4 If B is an irregular (α, 3/4 + 2/dx,min) expander, where dx,min is the minimum degree of the left nodes of B, then R(B) is an error reducing code of error reduction 1/2 and reducible distance α/(2dx,max), where dx,max is the maximum degree of the left nodes of B.

Proof: The proof follows from a demonstration that the Simple Sequential Error Reducing Algorithm is the algorithm that we need. Let v message and c check bits be corrupt, and let E be the number of edges connected to corrupt message bits. Let u be the number of unsatisfied check bits, s the number of satisfied check bits whose neighbors include corrupt message bits, and n the number of left nodes. Initially, v, c ≤ αn/(2dx,max). We use the following claim to prove the theorem.

Claim: If αn/2 ≤ E ≤ αn, then there is a message bit whose value is flipped by the execution of the algorithm.

By the expansion of the graph, we have

    u + s > (3/4 + 2/dx,min) E,

and by the definition of satisfied and unsatisfied check bits, we must have

    E + c ≥ u + 2s.

Combining the above two inequalities yields

    u > E/2 + 4E/dx,min − c.     (1)

Since 4E/dx,min ≥ 2αn/dx,min > αn/(2dx,max) ≥ c when αn/2 ≤ E ≤ αn, we have u > E/2, so there is some message bit that has more unsatisfied than satisfied neighbors, which completes the proof of the claim.

Now the claim tells us that if the algorithm halts, then E < αn/2 or E > αn. We show by contradiction that if the algorithm halts then E < αn/2. Since v, c ≤ αn/(2dx,max) initially, we can get an upper bound on u, which monotonically decreases during the execution of the algorithm:

    u ≤ dx,max v + c ≤ αn/2 + αn/(2dx,max).

Suppose that the algorithm halts with E > αn. It follows that before the algorithm halted, there must have been a time when E = αn. This can be translated into a lower bound on u via equation (1):

    u > αn/2 + 4αn/dx,min − αn/(2dx,max),

which exceeds the upper bound above because 4dx,max > dx,min, a contradiction. Because the halting of the algorithm must occur, given the monotonically decreasing number of unsatisfied checks, we have E < αn/2.

Lemma 5 The Simple Sequential Error Reducing Algorithm can be implemented to run in linear time.

Proof: [Sketch] The average left and right node degrees are independent of the code length, and the number of unsatisfied checks, which is at most linear in the code length, decreases with each flip.

We now give an example application of the Simple Parallel Error Reducing Algorithm.

Lemma 6 Let B be a bipartite graph with n left nodes and βn right nodes for some β > 0, whose minimum left node degree is at least 11. Then with probability 1 − O(1/n), B is an (α, δ) expander for some α > 0 and ε > 0 with δ = 9/10 + ε.

Proof: Similar to the proof of Lemma 3.

Theorem 7 If B is an irregular (α, 9/10 + 3/dx,min) expander and dx,min ≥ (8/9)dx,max, where dx,min and dx,max are the minimum and maximum degrees of the left nodes of B, then R(B) is an error reducing code of error reduction 1/2 and reducible distance α/2.

Proof: As before, let n be the number of left nodes. To prove the result, we show that if we are given a word that differs from a codeword in R in at most v message and c check bits, v, c ≤ αn/2, then the Simple Parallel Error Reducing Algorithm, repeated either until no message bit with more unsatisfied than satisfied neighbors remains or for log_{10/9} n rounds, whichever is smaller, outputs a word that differs from the codeword in R in at most c/2 message bits.

Let v message and c check bits be corrupt, and define the sets M, N, F, C to be

M = {corrupt message bits that enter a decoding round}
N = {corrupt check bits that enter a decoding round}
F = {corrupt message bits that fail to flip after a decoding round}
C = {uncorrupted message bits that become corrupt after a decoding round}

So v = |M| and c = |N|, and after a decoding round the set of corrupt message bits is F ∪ C. Observe that if αn/2 ≤ E ≤ αn, where E is the number of edges connected to corrupt message bits, then there is a message bit whose value is flipped by the execution of the algorithm; this follows directly from the proof of the previous theorem.

Claim: |M ∪ C| < αn.

The proof follows by contradiction. Suppose that |M ∪ C| ≥ αn and consider C′ ⊆ C such that |M ∪ C′| = αn. Defining N(A) to be the set of neighbors of a set A, we have by expansion

    |N(M ∪ C′)| > ((9/10)dx,min + 3)αn,

and since at most dx,max/2 of the neighbors of each bit in C′ are uncorrupted satisfied check bits, we get

    |N(M ∪ C′)| ≤ |N(M)| + (dx,max/2)|C′| + |N|.

This contradicts v, c ≤ αn/2 and dx,min ≥ (8/9)dx,max. Now since |M ∪ C| < αn, we get

    ((9/10)dx,min + 3)|M ∪ C| < |N(M)| + (dx,max/2)|C| + |N|,

and at least (dx,min/2)|F| − |N| edges from F go to uncorrupted satisfied check nodes that have at least one neighbor in M\F. This implies that

    |N(M)| ≤ dx,max|M| − (1/2)((dx,min/2)|F| − |N|).

Combining the above two inequalities yields

    ((9/10)dx,min + 3)|M ∪ C| < dx,max|M| − (dx,min/4)|F| + (1/2)|N| + (dx,max/2)|C|,

or

    (2/9)dx,max|F| + ((3/10)dx,max + 3)|C| < ((1/5)dx,max − 3)|M| + (1/2)|N|.

This implies that

    |F ∪ C| < [(9dx,max − 135)|M| + (135/2)|N|] / (10dx,max).     (2)

Consider now the two cases |M| ≥ |N|/2 and |M| < |N|/2. Substituting for |N| and |M|, respectively, in equation (2), we get

    |F ∪ C| < (9/10)|M|,  if |M| ≥ |N|/2,
    |F ∪ C| < (1/2)|N|,   if |M| < |N|/2.

We know that if the algorithm halts, then E < αn/2 or E > αn, and E is initially at most αn/2. The above inequalities imply that if the algorithm halts, then

    |M| < |N|/2,

which completes the first half of the proof of the theorem. If the algorithm does not halt, then after iterating for log_{10/9} K rounds for some constant K, we get

    |F ∪ C| < max(αn/(2K), |N|/2),

since |M| ≤ αn/2. Choosing K = n finishes the proof of the theorem.

Lemma 8 follows immediately.

Lemma 8 The Simple Parallel Error Reducing Algorithm can be performed by a circuit of linear size and constant depth.

III. Encoding and Decoding


Error reducing codes from the previous section do not necessarily correct all message bits that are corrupted. However, we can cascade these codes with an error-correcting code to construct a code that can correct all corrupted message bits if the number of such bits is not too large.

Let each graph in the set {Bi} of irregular expander graphs have β^i k left nodes and β^(i+1) k right nodes. We associate each graph with an error reducing code R(Bi) that has β^i k message bits and β^(i+1) k check bits, 0 ≤ i ≤ m. We also use an error-correcting code C that has β^(m+1) k message bits and β^(m+2) k/(1 − β) check bits. To encode k message bits, apply R(B0) to obtain βk check bits. Next, use the βk check bits from R(B0) as the message bits for R(B1) to obtain an additional β²k check bits. Repeat this process until we use the β^(m+1) k check bits from R(Bm) as the message bits for C, obtaining an additional β^(m+2) k/(1 − β) check bits. The resulting code is a cascade of the codes R(B0), ..., R(Bm), and C, which we denote by C(B0, ..., Bm, C). The code has k message bits and

    Σ_{i=1}^{m+1} β^i k + β^(m+2) k/(1 − β) = βk/(1 − β)

check bits, and is thus a code of rate 1 − β.

To decode C(B0, ..., Bm, C), we simply decode the individual codes R(B0), ..., R(Bm), C in reverse order. Since the code C corrects both message and check bits, the check bits of the code R(Bm) are known, and the message bits of R(Bm) can be corrected by the algorithms of the previous section. Since the check bits of the code R(B(m−1)) are then known, we can repeat this process up to the code R(B0), which completes the decoding of C(B0, ..., Bm, C). By choosing a code C that can be encoded and decoded in quadratic time and choosing m such that β^(m+1) k ≤ √k, we ensure that the code C(B0, ..., Bm, C) can be encoded and decoded in linear time.
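The cascade's data flow can be sketched abstractly. The stage codes here are hypothetical stand-ins (a single parity bit per reducing stage and a repetition-style tail), chosen only to make the wiring concrete, since the paper's stages come from random irregular expanders.

```python
def cascade_encode(message, reducer_checks, terminal_checks):
    """Cascade encoder: each stage's check bits are the next stage's
    message bits; the terminal error-correcting code caps the chain."""
    out, cur = list(message), list(message)
    for stage in reducer_checks:          # left (outermost) to right
        cur = stage(cur)                  # check bits of this stage
        out.extend(cur)
    out.extend(terminal_checks(cur))      # innermost, error-correcting
    return out

def parity(bits):   # hypothetical stage: one parity check bit
    return [sum(bits) % 2]

def tail(bits):     # hypothetical terminating code C: triple repetition
    return bits * 3

print(cascade_encode([1, 0, 1], [parity, parity], tail))
# [1, 0, 1, 0, 0, 0, 0, 0]
```

Decoding runs the stages in reverse, first the terminal code and then each R(Bi) from innermost to outermost, exactly as described above.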
Theorem 9 Let Bi be an irregular (β^i k, β^(i+1) k, α, 3/4 + 2/dx,min) expander, where dx,min is the minimum degree of the left nodes of Bi, 0 ≤ i ≤ m. Let C be an error-correcting code of β^(m+1) k message bits and β^(m+2) k/(1 − β) check bits, β^(m+1) k ≤ √k, that can correct a random α/(2dx,max) fraction of errors, where dx,max is the maximum degree of the left nodes of a Bi. Then C(B0, ..., Bm, C) is a rate 1 − β error-correcting code that can be encoded in linear time and can correct a random α/(2dx,max) fraction of errors in linear time.

Proof: The encoding complexity follows immediately. The decoding complexity follows from Lemma 5. The random error correction capability follows from Theorem 4.

Theorem 10 Let Bi be an irregular (β^i k, β^(i+1) k, α, 9/10 + 3/dx,min) expander, dx,min ≥ (8/9)dx,max, where dx,min and dx,max are the minimum and maximum degrees of the left nodes of Bi, 0 ≤ i ≤ m. Let C be an error-correcting code of β^(m+1) k message bits and β^(m+2) k/(1 − β) check bits, β^(m+1) k ≤ √k, that can correct a random α/2 fraction of errors. Then C(B0, ..., Bm, C) is a rate 1 − β error-correcting code that can be encoded by a linear-size circuit of constant depth and can correct a random α/2 fraction of errors with a linear-size circuit of at most logarithmic depth.

Proof: The encoding complexity follows immediately. The decoding complexity follows from Lemma 8. The random error correction capability follows from Theorem 7.

IV. Conclusion
We have constructed a family of linear-time encodable and decodable error-correcting codes that can also be encoded in constant time and decoded in at most logarithmic time if a linear number of processors is used. In constructing the codes, we used a constant number of error reducing codes that are linear-time encodable and decodable, or constant-time encodable and at most logarithmic-time decodable if a linear number of processors is used.

Our construction combines the ideas from [3], in which error reducing codes were cascaded differently to consider worst-case or burst errors, and from [1], in which the cascading of codes was used to obtain near-capacity-achieving linear-time encodable and decodable erasure codes. We are not sure whether the irregular graphs of Theorem 7 that we used in our construction will be of practical use, due to the restriction dx,min ≥ (8/9)dx,max.
References
[1] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. A. Spielman, and V. Stemann, "Practical Loss-Resilient Codes," in Proc. 29th Symp. on Theory of Computing, 1997, pp. 150-159.
[2] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Analysis of Low Density Codes and Improved Designs Using Irregular Graphs," manuscript.
[3] D. A. Spielman, "Linear-Time Encodable and Decodable Error-Correcting Codes," IEEE Trans. Inform. Theory, vol. 42, no. 6, pp. 1723-1731, Nov. 1996.
