
FreeHDL Compiler Control Data Flow Graph and its Application in Waveform Compression

Said Mchaalia

August 18, 2012
1 Intention of this document
In this document, we present an idea of using the Control Data Flow Graph (CDFG) to improve waveform compression.
A waveform file is a VCD (Value Change Dump) file, which is divided into two parts: a header, which contains general information such as version and date, and a kernel, which contains the signal values belonging to each simulation step together with the simulation time.
Figure 1 shows a VCD file. Notice that each parameter is coded with an ASCII identifier composed of 1 to 4 characters, depending on the number of parameters inside the considered VHDL model. For each VHDL model, the simulation time is written before the parameter values. For example, #0 and #100 represent the simulation times 0 and 100 us. The character b preceding a parameter value indicates that this value is given in binary format. Thereby, all parameter values are transformed to a binary format before they are written into the VCD file.
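For illustration, a minimal sketch of the header/kernel split is shown below (plain Python; the file name design.vcd is just an example). It relies only on the standard $enddefinitions keyword that terminates the VCD header.

def split_vcd(path):
    header_lines, kernel_lines = [], []
    in_header = True
    with open(path) as vcd:
        for line in vcd:
            if in_header:
                header_lines.append(line)
                # "$enddefinitions $end" closes the header part of the VCD file
                if line.strip().startswith("$enddefinitions"):
                    in_header = False
            else:
                kernel_lines.append(line)
    return "".join(header_lines), "".join(kernel_lines)

header, kernel = split_vcd("design.vcd")
print(len(header), "header bytes,", len(kernel), "kernel bytes")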
To compress a VCD file, many techniques have been developed. The basic idea of these techniques is inspired by the Lempel-Ziv and other data compression algorithms [4]. Mainly, five algorithms, namely Time-Value Separation, Time Compression, Value Compression, Strength Reduction and Cross Signal Strength Reduction (see [1] and [2]), have been developed to allow a suitable VCD file compression.
The object of this work is to present an idea of using the CDFG to develop a new waveform compression algorithm. In the following sections, we describe this idea and present its features.
$date
Sep 26 2000 16:28:52
$end
$version
FREEHDL 0.1
$end
$timescale
1 us
$end
$scope module struct $end
$var reg 8 ! qsig2[8:1] $end
$var reg 8 " qsig[8:1] $end
$var trireg 1 ? clk $end
$upscope $end
$enddefinitions $end
#0
$dumpvars
b110100 !
b110100 "
b0 ?
#100
b110100 !
b1 ?
$end
Figure 1: VCD File
2 An overview of waveform compression techniques
The basic idea of waveform compression is to find a method that reduces waveform file sizes in order to free storage space for further use.
The current waveform compression techniques are described in [1] and [2].
3 Control Data Flow description
The Control Data Flow Graph is a graphical representation that identifies each statement in a given program with a graph. A CDFG is composed of edges, which represent data and/or control, and nodes, which identify arithmetic and/or logic operations [3]. Figure 3 represents the CDFG of a VHDL model.
4 CDFG and its application in waveform compression
The object of using CDFG is to develop a compression algorithm
based on interpreted graphical information. In this section we detail
the different waveform compression cases in which the CDFG will be
used.
4.1 Dependency Schedules
In this section, we deal with the dependencies between signals and/or variables and the dependencies between processes. The section is divided into two subsections: the first treats the dependencies between signals and/or variables, while the second addresses the dependencies between processes.
4.1.1 Signal dependency schedules
We know that the CDFG gives a priori knowledge of the relationships between signals and variables inside a given VHDL model. Due to this information, we can minimize the data storage. For example, let us consider the VHDL model described in Figure 2. We see that the signal c changes value only when the signals a and/or b change value. So, we can use this information to reduce the amount of data stored for the considered waveform: we do not need to store the c signal values, we only have to store the a and b signal values.
process(a,b)
begin
c <= a xor b;
d <= c and a;
end process;
Figure 2: VHDL model 1
Figure 3: CDFG of VHDL model 1
On the other hand, notice that the signal d depends on whether the signal a and/or the signal c change value or not. Due to this relationship between the signals and the principles of the VHDL language, we do not need to store the d signal values either.
So, a priori knowledge of the dependencies determines whether a signal value has to be stored or not.
Indeed, in Figure 2 we can predict the output signal values from the knowledge of the operand signal values. The CDFG of the VHDL model described in Figure 2 is presented in Figure 3. We see that the CDFG shows the dependencies between all signals considered in this process.
So, we do not need to store the d and c signal values, due to the dependencies between those signals and the signals a and b. The idea is to find a way to store the information describing the dependencies between those signals. This can be solved by storing these operations:
c = a XOR b (1)
and
d = c AND a (2)
that attach each set of signals to the other. The first operation (equation (1)) describes the relationship between the signals a, b and c, and the second one (equation (2)) describes the relationship between the signals a, c and d. Notice that these equations only have to be stored once for each waveform compression process.
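As an illustration, a minimal restoration sketch is given below (plain Python, not FreeHDL code); the example traces are assumptions made for the example. It rebuilds the dependent signals c and d of Figure 2 from the stored signals a and b using the stored relations (1) and (2).

def restore_combinational(a_values, b_values):
    # a_values, b_values: lists of 0/1 values, one entry per simulation step
    c_values, d_values = [], []
    for a, b in zip(a_values, b_values):
        c = a ^ b          # stored relation (1): c = a XOR b
        d = c & a          # stored relation (2): d = c AND a
        c_values.append(c)
        d_values.append(d)
    return c_values, d_values

# Example: only a and b are kept in the compressed waveform.
c_vals, d_vals = restore_combinational([0, 1, 1], [0, 0, 1])
print(c_vals, d_vals)   # [0, 1, 0] [0, 1, 0]

The decompressor thus trades a small amount of recomputation for not storing c and d at all.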
Let us see what happens when we consider a VHDL model that depends on a clock signal. Figure 4 shows the previous example in the case where the process P depends on a clock signal clk.
We see that in the VHDL model of Figure 4, the considered process assigns values to c and d only when the clock signal clk changes value.
process(clk)
begin
if clk'event and clk = '1' then
c <= a xor b;
d <= c and a;
end if;
end process;
Figure 4: VHDL model 2
Figure 5: VHDL signal assignment procedure
Moreover, the signals c and d change value with a delay of one clock cycle compared to the signals a and b. Figure 5 describes the signal assignment structure of the VHDL language in the case of ideal electrical circuits without any component delays. We notice that there is a clock cycle delay between the two assignments and the a and b signal values, i.e. the signal assignment is achieved only after a delay of one clock signal cycle.
In order to keep the models synthesizable, and due to the VHDL signal assignment semantics, the previous relationships between c, a and b, and between d, c and a, are no longer valid for this VHDL model that depends on a clock signal. Therefore, we must define new relationships between these signals, which depend on the clock signal clk.
Figure 6 represents the CDFG of the VHDL model shown in Figure 4. Notice that this graphical representation describes the relationships between all signals in this VHDL model. The incoming data edges a and b of the XOR node come from the Read Signal node, which is the source of the outgoing data edges, whereas the outgoing data edge c is an incoming edge of the Write Signal node, which collects the incoming data edges. Between the Read Signal node and the Write Signal node there is exactly one clock cycle of delay. After one clock cycle, the value of signal c is transferred to the Read Signal node. On the other hand, the incoming data edges a and c of the AND node come from the Read Signal node. This means that there is a clock cycle delay between these signals and the outgoing data edge d of this node. Notice that the dependency of these nodes on the clock signal is characterized by the control edges true and false coming from the Select node, which is activated by the clock event node.
Figure 6: CDFG of VHDL Model 2
Let us consider the new relations as follows:

c^(n) = f(a^(n-1), b^(n-1))    (3)

where f(x, y) is a logical function defined as follows:

f(x, y) = x XOR y    (4)

as the first relation between the signals c, a and b, and

d^(n) = f(c^(n-1), a^(n-1))    (5)

where f(x, y) is a logical function defined as follows:

f(x, y) = x AND y    (6)

as the second relation between the signals d, a and c. Here n is the nth clock signal (clk) cycle.
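A small sketch of the corresponding restoration (assumed semantics, plain Python rather than FreeHDL code) is given below; the input traces and the reset values c0 and d0 are example assumptions. Following relations (3) to (6), the value at clock cycle n is computed from the stored traces of a and b at cycle n-1.

def restore_clocked(a_trace, b_trace, c0=0, d0=0):
    # a_trace, b_trace: values sampled once per clk cycle; c0, d0: initial values
    c_trace, d_trace = [c0], [d0]
    for n in range(1, len(a_trace)):
        c_trace.append(a_trace[n - 1] ^ b_trace[n - 1])   # c^(n) = a^(n-1) XOR b^(n-1)
        d_trace.append(c_trace[n - 1] & a_trace[n - 1])   # d^(n) = c^(n-1) AND a^(n-1)
    return c_trace, d_trace

c_t, d_t = restore_clocked([0, 1, 1, 0], [1, 1, 0, 0])
print(c_t, d_t)   # [0, 1, 0, 1] [0, 0, 1, 0]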
4.1.2 Process dependency schedules
In this section, we give an idea of how the CDFG can resolve the dependencies between processes inside the same VHDL model and how this information can be used in the waveform compression techniques.
Let us look at the example of Figure 7. We see that the three processes P1, P2 and P3 depend on each other. Based on the CDFG of each process, and on the knowledge that all processes inside the same VHDL model share the same Read Signal and Write Signal nodes, we can derive relationships between signals inside different processes and thus achieve a suitable optimization of the waveform compression.
Figure 8 describes the CDFG of the VHDL model represented in Figure 7. We see that the CDFGs allow us to obtain information about the dependencies between the signals inside these processes, and thus to achieve the desired optimization, which is to store only the signal a and all relationships between the considered signals, such as:

b = f(a, 1)    (7)

where f(x, y) is a logical function defined as follows:

f(x, y) = x XOR y    (8)
P1: process(a)
begin
b <= a xor '1';
end process;
P2: process(b)
begin
c <= b nor '1';
end process;
P3: process(c)
begin
d <= c xor '1';
end process;
Figure 7: VHDL model 3
Figure 8: CDFG of VHDL Model 3
c = f(b, 1)    (9)

where f(x, y) is a logical function defined as follows:

f(x, y) = x NOR y    (10)

d = f(c, 1)    (11)

where f(x, y) is a logical function defined as follows:

f(x, y) = x XOR y    (12)
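A short restoration sketch for this process chain is shown below (plain Python, for illustration only; the trace of a is an example). It uses the stored relations (7) to (12) to rebuild b, c and d from a alone.

def restore_process_chain(a_trace):
    restored = {"b": [], "c": [], "d": []}
    for a in a_trace:
        b = a ^ 1                 # relation (7)/(8): b = a XOR 1
        c = 1 - (b | 1)           # relation (9)/(10): c = b NOR 1 (always 0 here)
        d = c ^ 1                 # relation (11)/(12): d = c XOR 1
        restored["b"].append(b)
        restored["c"].append(c)
        restored["d"].append(d)
    return restored

print(restore_process_chain([0, 1, 0]))
# {'b': [1, 0, 1], 'c': [0, 0, 0], 'd': [1, 1, 1]}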
Above, we presented the case in which the processes depend on signals. Let us look at what happens when there are processes inside the considered model that depend on a clock signal.
The simplest case of clock-dependent processes is the one in which all processes have the same clock signal and are activated on the same clock cycle. This case is a composition of the process dependencies and the dependency on the clock signal. The relationship is then described as follows:
S^(n+1) = f(S_i^(n), S_j^(n))    (13)

where f represents a multi-variable mathematical function.

The more complex case is the one in which each process has its own clock signal. In this case, we must define a relationship between all clock signals and try to express the other clock signals as a function of the first one, for example. If we suppose, for example, that the clock signal clk_k is k times slower than the clock signal clk_0, and that the clock signal clk_p is p times faster than the clock signal clk_0, we can write the following relationships between the distinct clock signals:

clk_k = k clk_0    (14)

clk_p = (1/p) clk_0    (15)

The relationships between the signals inside these processes are then defined as follows:

S_l0^(n) = f_0(S_i0^(n-1), S_jk^(E((n-1)/k)), S_hp^(p(n-1)))    (16)

where E(x) is a function defined as follows:

E(x): IR → IN, x ↦ n such that n-1 < x ≤ n    (17)

where IR is the set of real numbers and IN is the set of natural numbers.

S_lk^(n) = f_k(S_i0^(k(n-1)), S_jk^(n-1), S_hp^(kp(n-1)))    (18)

S_lp^(n) = f_p(S_i0^(E((n-1)/p)), S_jk^(E((n-1)/(pk))), S_hp^(n-1))    (19)
Figure 9 represents a VHDL model whose processes depend on different clock signals.
The CDFG of the VHDL model described in Figure 9 is shown in Figure 10. Notice that in the VHDL model of Figure 9, the three clock signals clk0, clk1 and clk2 are distinct. So, to get a relationship between the signals a, b, c and d, we must define relations between the clock signals.
P1: process(clk0)
begin
if rising_edge(clk0) then
b <= a xor '1';
end if;
end process;
P2: process(clk1)
begin
if rising_edge(clk1) then
c <= b nor '1';
end if;
end process;
P3: process(clk2)
begin
if rising_edge(clk2) then
d <= c xor '1';
end if;
end process;
Figure 9: VHDL model 4
Figure 10: CDFG of VHDL Model 4
Suppose for example that

clk1 = 3 clk0    (20)

and

clk2 = 0.5 clk0    (21)

To derive the relations between the signals defined previously, we can consider these equations:

b^(n+1) = a^(n) XOR 1    (22)

where n is the nth cycle of the clock signal clk0,

c^(k+1) = b^(k) NOR 1    (23)

where k is the kth cycle of the clock signal clk1, and

d^(m+1) = c^(m) XOR 1    (24)

where m is the mth cycle of the clock signal clk2.

Indeed, we can use these equations to minimize the data storage. Thereby, we only have to store the values of the signal a, the relations between the different clock signals clk0, clk1 and clk2, and the relations between the signals a, b, c and d. Notice that these relations are stored only once during the simulation of the VHDL model.
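The sketch below (plain Python, illustration only) restores b and c from the stored trace of a under the assumed ratio of equation (20), i.e. clk1 is three times slower than clk0, so clk1 cycle k starts at clk0 cycle 3k. The signal d, whose clock clk2 is faster than clk0, would need a finer time grid and is left out here.

def restore_two_clocks(a_per_clk0):
    n_clk0 = len(a_per_clk0)
    # relation (22): b^(n+1) = a^(n) XOR 1, indexed by clk0 cycles
    b_per_clk0 = [0] + [a_per_clk0[n] ^ 1 for n in range(n_clk0 - 1)]
    # relation (23): c^(k+1) = b^(k) NOR 1, indexed by clk1 cycles;
    # clk1 cycle k samples b at clk0 cycle 3k
    n_clk1 = n_clk0 // 3
    c_per_clk1 = [0] + [1 - (b_per_clk0[3 * k] | 1) for k in range(n_clk1 - 1)]
    return b_per_clk0, c_per_clk1

print(restore_two_clocks([0, 1, 1, 0, 1, 0, 1, 1, 0]))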
4.2 CDFG and Waveform Decompression
In this section, we will describe how the CDFG will be used in the
complex cases to improve the waveform compression techniques.
4.2.1 Compromise between decompression and signal dependencies
In the previous section, we treated the case of signal dependencies and a solution based on the CDFG. In section 4.1.1, we developed an approach that decides whether a signal has to be stored or not. Indeed, the storage of signal values needs a priori knowledge of the signal dependencies. Thereby, a signal value is stored only when this signal is independent from all other signals defined inside a given VHDL model. On the other hand, section 4.1.1 implies that only the relationships between signals, in their mathematical form, are stored, because this has to be done only once during the compression of a simulation run. However, it is hard to exploit this during the decompression process, because we do not know in advance the complexity of the mathematical functions that link the signals to each other. For complex VHDL models, the functions mapping a set of signals to a defined signal are mostly very complex themselves. This makes the decompression process very complex and requires a large amount of schedule time.
To solve this problem, we try to find an optimized solution for the waveform compression techniques based on the signal dependencies. This solution is described as follows:

In the case of a signal that depends on a set of signals, find an algorithm that decides which signals to store, based on the number of operations that identify the signal values. Thereby, let us consider a signal S_k that depends on a set of signals. This signal can be written as follows:

S_k = f_k(S_p0, ..., S_pi, S_j, ..., S_n)    (25)

where S_p0, ..., S_pi are independent or primary signals and S_j, ..., S_n are signals that depend on each other and on the other signals.
For example, let us consider the following signal dependencies:

S_j = f_j(S_1, ..., S_l)    (26)

S_m = f_m(S_3, S_2)    (27)

S_l = f_l(S_0)    (28)

S_n = f_n(S_0, S_h, S_m)    (29)
According to section 4.1.1, we would store only the signals that do not depend on any other signal, for example S_p0, ..., S_pi. Therefore, the complexity of the decompression process depends on the complexity of the functions f_0(), ..., f_n(). Notice that these functions can be arbitrarily complex, and the decompression task can thus become very complex as well. To solve this problem, we can proceed as follows:

First, store all independent signals.
Figure 11: Complexity of the dependency functions as a function of the signal number
Secondly, in the case of dependent signals, check whether the dependency function f_i(), i = 1 ... n and i ≠ k, is complex or not. If it is simple, then do not store the corresponding dependent signal S_i. Figure 11 represents the graph of the complexity of the dependency functions.
In Figure 11, we see that the complexity C_ij of a dependency function f_i() depends on the number of signals j contained in this function: the higher this number, the lower the complexity, and vice versa. This means that, in the case of dependent signals, we must search for the number of signals to be stored in order to reduce the complexity of the decompression process. In Figure 11, we see that for given dependency functions that depend on m signals, we can find a suitable signal number k, with k < m, for which we obtain an optimized complexity of all considered dependency functions. So, our object is to develop an algorithm that returns the number k of signals whose values will be stored, allowing a simple restoration of the other signals, and that identifies the signals that must be stored.
Thirdly, optimize the compromise between the storage space for signal values and the schedule time complexity needed for the decompression process. As described above, we must find the signal number k that yields a suitable complexity of all considered dependency functions inside a given VHDL model. However, the search for this number is complex, and in most cases we do not find a single number but an interval. In this case, we must choose the signal number k based on additional information, namely the reduction of the storage space needed for signal values: within the considered interval, we pick the signal number k that needs the least storage space for signal values.
Finally, store all needed signals and the required dependency functions.
Problem formulation: The problem we are going to consider may be formulated as follows. For a given signal S_k, which is written in the form

S_k = F_k(S_0, ..., S_i), i = 0 ... n,    (30)
minimize the dependency function complexity C_ki, which can be written in the following form:

C_k = Σ_i C_ki = Σ_{i,j} δ_j w_i C_ij    (31)

where k, i = 0 ... n, n is the maximal number of signals, f(., .) is a function to be defined, and:

δ_j is defined as follows:

δ_j = 0 if the value of S_j is not stored in the DDB, 1 otherwise    (32)

w_i is the weight of the operation contained inside the dependency function f_i(). It is defined as follows:

w_i = 0 if S_i is independent, 1 if f_i() = NOT, 2 if f_i() = XOR, and so on    (33)
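A minimal sketch of this cost model is given below (plain Python; the weight table, the dependency table and the simplification C_ij = 1 for a stored operand are assumptions made for the example, not part of FreeHDL).

# Operation weights in the spirit of equation (33)
OP_WEIGHT = {"INDEPENDENT": 0, "NOT": 1, "XOR": 2, "AND": 2, "OR": 2, "NOR": 3}

def dependency_cost(dependencies, stored):
    # dependencies: {signal: (operation, [operand signals])}
    # stored: set of signals whose values are kept in the DDB (delta_j = 1)
    total = 0
    for signal, (op, operands) in dependencies.items():
        w = OP_WEIGHT[op]
        for operand in operands:
            delta = 1 if operand in stored else 0   # equation (32)
            total += delta * w                      # delta_j * w_i, with C_ij taken as 1
    return total

deps = {"c": ("XOR", ["a", "b"]), "d": ("AND", ["c", "a"])}
print(dependency_cost(deps, stored={"a", "b"}))   # 4 for c plus 2 for d = 6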
Let us consider the signal set S = {S_0, ..., S_n} as the information model to be identified. It is as if our VHDL model were a source of signals, produced at each simulation time t_i with distinct probabilities p_i. These signals depend on each other and are correlated, and each signal S_i represents a piece of information to be stored or not. Thereby, our problem can be considered as a lossless data compression problem: encode each signal S_i with a number of bits and then optimize the number of signal values that must be stored for further use, while still allowing a suitable restoration of all signals during the decompression process.
On the other hand, for a given signal set S = {S_0, ..., S_n}, we consider the probability p_i that a signal S_i, i = 0 ... n, appears in the VCD file during a simulation run, and the probability p_{i/j}, i ≠ j and i, j = 0 ... n, as the probability of a dependency between the signals S_i and S_j. The probability p_i of the signal S_i is defined as follows:

p_i = (α_i SS_i) / (Σ_{j=0}^{n} α_j SS_j)    (34)

where α_i is the occurrence frequency of the signal S_i and SS_i is the storage space needed for storing the signal S_i. This means that when a signal S_i appears only once during the simulation run, α_i has the value one, and so on. The probability p_{i/j} of a dependency between the signals S_i and S_j (the conditional probability) is defined as follows:

p_{i/j} = (β_j SS_j) / (α_i SS_i) = (β_j SS_j) / (p_i Σ_{l=0}^{n} α_l SS_l)    (35)

where β_j is the number of times that S_j contributes to the signal S_i, i.e. the number of times that the signal S_j appears as a parameter of the function f_i().
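As a small illustration of equation (34), the sketch below computes the signal probabilities from occurrence counts and storage sizes (plain Python; the numbers are made up for the example).

def signal_probabilities(occurrences, storage_sizes):
    # occurrences[i]: how often signal i appears in the VCD file (alpha_i)
    # storage_sizes[i]: storage space needed for signal i (SS_i)
    weights = [a * ss for a, ss in zip(occurrences, storage_sizes)]
    total = sum(weights)
    return [w / total for w in weights]

print(signal_probabilities(occurrences=[10, 3, 7], storage_sizes=[8, 1, 8]))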
The problem we are dealing with can be defined as an optimization problem. Mathematically, this problem can be written as follows:

OP: min_m ( (1/m) Σ_{i,j} C_ki )  and  min_m ( (1/m) Σ_i p_i )    (36)
From this optimization problem, we can say that the needed storage space and the dependency function complexity are inversely proportional. So, we can define a new space composed of the complexity and the probability, as shown in Figure 12. This allows us to consider the function to minimize as follows:

min_k ( (1/k) Σ d_im^2 )    (37)
where d_im is the distance between the points X_min(p_m, C_km) and X_i(p_i, C_ki):

d_im = d(X_i, X_m) = sqrt( ((p_i - p_m)/A)^2 + ((C_ki - C_km)/B)^2 )    (38)

where A and B are scaling constants, and the distance d_m is defined as follows:

d_m = min_i(d_i),  d_i = sqrt( (p_i/A)^2 + (C_ki/B)^2 )    (39)

The point X_min(p_m, C_km) is chosen such that S_m = f_m(S_j), j ∈ {0 ... n}, is composed of independent signals only. This means that all S_j are independent.
So, the problem we are dealing with can be written as follows:

min_l ( (1/l) Σ ( (p_i - p_m)^2 + (C_ki - C_km)^2 ) )    (40)
This problem can be transformed into a quadratic optimization problem of the following form: for k ∈ {i ... h}, let

X_k = (p_i, ..., p_k, C_ik, ..., C_kk)^T

and

min_k ( (1/k) (X_k^T M_k X_k + V_k X_k + K_k) )    (41)

where M_k is the matrix of the quadratic optimization: a block-diagonal matrix whose upper-left k x k block is (1/A^2) times the identity and whose lower-right k x k block is (1/B^2) times the identity. This matrix has dimension (2k, 2k), i.e. M_k ∈ R^(2k x 2k). We can show that the matrix M_k is symmetric and positive definite, which means M_k^T = M_k and, for all x ∈ R^(2k x 1), x^T M_k x > 0.
V_k = -2 (p_m/A, ..., p_m/A, C_km/B, ..., C_km/B)^T is the vector of the quadratic optimization. The length of this vector is 2k.
K_k = (p_i^2/A^2, ..., p_k^2/A^2, C_km^2/B^2, ..., C_kk^2/B^2)^T is a constant vector. The length of this vector is 2k.

Figure 12: Considered Space

Figure 13: Variation of the chance of getting an optimal solution
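For illustration, the sketch below evaluates the distance-based objective of equation (40) in the (probability, complexity) space of Figure 12 for two candidate signal sets (plain Python; the points, the reference X_min and the scaling constants A and B are example assumptions).

def objective(points, reference, A=1.0, B=1.0):
    p_m, C_km = reference
    total = sum(((p - p_m) / A) ** 2 + ((c - C_km) / B) ** 2 for p, c in points)
    return total / len(points)

candidate_sets = {
    "store {S0, S1}": [(0.40, 2.0), (0.35, 3.0)],
    "store {S0, S1, S2}": [(0.40, 2.0), (0.35, 3.0), (0.10, 9.0)],
}
reference = (0.30, 2.5)   # X_min built from independent signals only (assumed)
for name, points in candidate_sets.items():
    print(name, objective(points, reference))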
The solution of the considered problem consists in finding a set of signals X_m = {S_i}, i ∈ {j ... m}, where j < m < n, which satisfies the condition below. Our object is to define the convenient signal set with which we obtain an optimal storage space together with an optimal complexity.
So, our new problem is formulated as follows:

for m ∈ {0 ... k}, find X_m = {S_i}, i ∈ {j ... m}, j < m < n    (42)

that satisfies:

for all m, X_m = {S_i}, i ∈ {0 ... m}:
min_{k=0...n, i=0...n} ( (1/m) Σ C_ki ) = ( (1/m) Σ C_ki )_opt
min_m ( (1/m) Σ p(S_i) ) = ( (1/m) Σ p_i )_opt    (43)
To resolve this problem, we consider the optimal signal set X_opt, which has a length (number of signals) n_opt such that n_opt < n, where n is the total number of signals inside the considered VCD file.
To find X_nopt, we need to consider a discrete probability problem, which is defined as follows:
For each m ∈ {0 ... k} we have a chance probability of obtaining a solution of our OP problem. This solution can be written as ( (1/m) Σ C_ik, (1/m) Σ p_i ). Let us associate with this solution a chance probability q_m that satisfies 0 < q_m < 1.
We know that for n_opt we have the best chance of obtaining the optimal solution ( (1/n_opt) Σ C_ik, (1/n_opt) Σ p_i ) of our OP problem. So, for all m ∈ {0 ... k} the following condition has to be verified: 1 > q_nopt ≥ q_m > 0. Figure 13 shows the variation of the chance of obtaining an optimal solution as a function of the considered signal set.
We can assume that the variation of the chance of obtaining an optimal solution follows a distribution. So, we can define this distribution, which identifies the relationship between the chance probability of obtaining an optimal solution and the signal sets. This distribution can be written as follows:
q(X_i) = α exp( -(X_i - X_nopt)^T (X_i - X_nopt) / λ ) + μ    (44)

where α, λ and μ are constants.

We note that the peak of the chance is found at the point X_nopt. However, we cannot prove that the considered distribution or density is symmetric around X_nopt. If we assume that this density is symmetric around X_nopt, we can show that it can be written as follows:

q(X_i) = f_{X_nopt, M_k}(X_i) = ( 1 / ((2π)^k sqrt(det(M_k))) ) exp( -(X_i - X_nopt)^T M_k^(-1) (X_i - X_nopt) / 2 )    (45)
Proof: to prove the equation above, consider the p-variate normal distribution with p = 2k [5].
So, we can use this equation as a constraint for the optimization problem defined in equation (41). We can always fix the chance of obtaining an optimal solution of the considered optimization problem, and then search for the corresponding signal set that achieves the considered chance probability.
For example, if we require the chance of obtaining an optimal solution to be greater than 0.7, we can consider the following quadratic optimization problem:
min_k ( (1/k) (X_k^T M_k X_k + V_k X_k + K_k) )

subject to

( 1 / ((2π)^k sqrt(det(M_k))) ) exp( -(X_k - X_nopt)^T M_k^(-1) (X_k - X_nopt) / 2 ) ≥ 0.7    (46)

where k represents the length of the vector X_k = (S_i, ..., S_k)^T that must be identified.
4.2.2 One signal restoration
In this section we treat the case of restoring one signal from a list of signals that has been compressed. We know that the considered signal is a combination of a set of signals. So, to restore this signal we need to evaluate the function linking this signal to all other signals. Indeed, the complexity of the restoration depends on the computation of these signals, which can be arbitrarily complex. To reduce this complexity, we need to reduce the complexity of the computation of all signals that depend on each other. This means that the restoration depends on the number of operations needed to compute all required signals.
S_n = F_n(S_0, ..., S_{n-1})    (47)

where

S_i = F_i(S_0, ..., S_k), i, k = 0 ... n-1    (48)

So, to compute F_n() we first need to compute the functions F_i(). The complexity of the function F_n() is therefore computed from the function complexities F_i():

C_n = Σ_{i=0}^{n} w_i C_i    (49)

where w_i represents the weight of the operation belonging to signal S_i. This problem can be transformed into an optimization problem, which is

min_n ( Σ_{i=0}^{n} w_i C_i )    (50)
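A hedged sketch of this recursive cost computation is shown below (plain Python; the dependency table and the weights are illustrative, and a stored value is assumed to cost 1).

from functools import lru_cache

DEPS = {            # signal -> (operation weight w_i, operand signals)
    "S3": (2, ["S1", "S2"]),
    "S2": (1, ["S0"]),
    "S1": (2, ["S0"]),
    "S0": (0, []),  # independent signal: stored directly
}

@lru_cache(maxsize=None)
def restoration_cost(signal):
    w, operands = DEPS[signal]
    # C_n = w_n plus the cost of every operand, per equations (47)-(49)
    return w + sum(restoration_cost(op) if DEPS[op][1] else 1 for op in operands)

print(restoration_cost("S3"))   # 2 + (2 + 1) + (1 + 1) = 7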
4.2.3 Data base optimization
In this section we give an overview of the case where a signal depends on itself at each clock cycle and/or on a signal that depends on the clock cycle. Despite the simplicity of simulating the relationship between the considered signal and the associated signals, we have to solve the problem of storage space optimization.
Let us look at the example of Figure 14. We see that the signal a depends on the clock signal: it changes value at each clock signal cycle. So, if we want to store the signal a, we need a large amount of storage space. Mathematically, we can write the following relation between the clock signal clk and the signal a:
a^(n) = a^(n-1) XOR 1    (51)

where n is the nth clock signal cycle.
P: process(clk)
begin
if rising_edge(clk) then
a <= a xor '1';
end if;
end process;
Figure 14: VHDL model 5
Notice that this equation allows us to restore the signal a at clock cycle n based on the value of this signal at clock cycle (n-1). So, we can store only the value of this signal at the first clock cycle and then compute the value of the considered signal recursively. The problem, however, is that this computation of the value of signal a at clock cycle n is very expensive and needs a large amount of schedule time when n is large.
To solve this compromise, we need to find a solution that reduces both the schedule time needed to restore the signal a at clock cycle n, for arbitrarily large n, and the storage space needed to store the signal a at each clock cycle.
Problem formulation: The problem we are going to consider may be formulated as follows. Given

for all n ∈ IN: S^(n) = f_n(S^(n-1))    (52)

find

n_opt = { min_n ( (n-1) SS(a) ), min_n ( (n-1) C_0 ) }    (53)

where SS() is the storage space needed to store a signal value.
This can be transformed as follows: find p ∈ IN such that

n = i p, where 1 ≤ i ≤ p    (54)

and such that

for all n, p ∈ IN: S^(n) = f_{n-p}(S^(p))    (55)

For each clock signal cycle n, we try to find an integer p that allows us to store the signal values S^(ip) and that minimizes the schedule time needed for the restoration of the signal value S^(n). Arbitrarily, we can choose p as follows:

p ∈ IN: p - 1 ≤ n/10 ≤ p, i = 1 ... 9    (56)

Figure 15: Software Design
Schedule time analysis: In this paragraph, we justify the choice of the value of p. Let us consider the case where all values of the signal a are stored. Then we need (n-1) x size(a) of storage space to store all values of signal a. However, if we store only the value of a at clock cycle 0, we need (n-1) x C_0 of schedule time to restore the value of signal a at clock cycle n. Indeed, if we apply the developed algorithm, we need only about p x C_0 ≈ (n/10) x C_0 of schedule time to restore the a signal value at clock cycle n, and only 9 x size(a) of storage space. So, both the schedule time and the storage space are reduced by roughly a factor of 10.
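The checkpointing idea of equations (54) to (56) can be sketched as follows (plain Python, illustration only; the cycle count and checkpoint distance p are example values).

def step(a):
    return a ^ 1            # relation (51): a^(n) = a^(n-1) XOR 1

def make_checkpoints(a0, n_cycles, p):
    # simulate once and keep only the values at cycles 0, p, 2p, ...
    checkpoints, a = {0: a0}, a0
    for cycle in range(1, n_cycles + 1):
        a = step(a)
        if cycle % p == 0:
            checkpoints[cycle] = a
    return checkpoints

def restore(checkpoints, cycle, p):
    # restore a^(cycle) with at most p applications of the relation
    base = (cycle // p) * p
    a = checkpoints[base]
    for _ in range(cycle - base):
        a = step(a)
    return a

cps = make_checkpoints(a0=0, n_cycles=100, p=10)   # 11 stored values instead of 101
print(restore(cps, cycle=97, p=10))                # at most 10 steps instead of 97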
4.3 Waveform Compression Software Design
In this section an overview of the developed software design is presented. Figure 15 shows the general structure of this software.
FreeHDL Simulator: its task is to simulate a given VHDL design in order to create the VCD and DDB files for further use.
CDFG Simulator: the CDFG simulator creates the Control Data Flow Graph of the considered VHDL model and writes it to a specific file.
Waveform Compressor: its task is the compression of the VCD file based on the CDFG of the considered model. The obtained file is stored for further use.
Waveform Decompressor: realizes the decompression of a compressed VCD file in order to view it.
Waveform Viewer: allows the viewing of the decompressed VCD file in order to obtain the verification results of the considered VHDL design.
Figure 16 shows the software module designs in detail.
Waveform Compressor: the Waveform Compressor is composed of two basic modules:
Waveform Compression module: this module implements the compression techniques. Three compression modules are employed:
Header file compression module: as described above, the compression of the VCD header is done independently of the Control Data Flow Graph.
Simulation Time Compression module: this module handles the compression of the signal ids based on the knowledge of the considered clock cycle and the signal events. It returns the signal ids that must be compressed and stored.
Parameter Value Compression module: this module realizes the compression of the considered parameter values inside the given VCD file, based on the information returned from the Dependency Scheduler and the VCD file. It returns the signal values that must be compressed and stored.
Dependency Scheduler module: this module treats the dependencies of signals and processes based on the developed CDFG of the VHDL design. It decides whether a given signal id and value will be compressed and stored or not. So, it controls the Parameter Value Compression and the Simulation Time Compression modules. Thereby, it tells the Simulation Time Compression module which signal ids must be compressed, and the Parameter Value Compression module which signal and variable values must be compressed and stored. On the other hand, it returns the dependencies between signals and variables in mathematical form to the Waveform Compression module to be stored. Its input comes from the CDFG Scanner and Parser.
CDFG Scanner and Parser module: this module translates the CDFG, which is written in a given file, into a data base inside the memory for further use. Its inputs come from the DDB and CDFG files.
Viewer: the viewer produces the verification results of the considered VHDL model. Basically it consists of two modules:
Waveform Decompressor: it is composed of four modules, which are needed for the decompression of a compressed VCD file.
Figure 16: Software Module Designs
Header File Decompression module: this module is used to decompress the VCD header.
Simulation Time Decompression module: this module performs the decompression of the simulation times based on the information returned from the compressed VCD file and the Dependency Simulator.
Parameter Value Decompression module: this module decompresses all parameter values based on the information stored in the compressed VCD file and the information returned from the Dependency Simulator.
Dependency Simulator module: this module simulates the stored dependencies between parameters and returns either the parameter values or the simulation times.
Waveform Viewer: displays the decompressed VCD file in order to obtain the verification results of the VHDL model.
5 Time Schedule
In this section we present the time needed to implement each module of this software.
VCD header file compression: already done.
Simulation time compression: one month.
Parameter value compression: one month.
Dependency scheduler: two months.
CDFG scanner and parser: already done.
VCD header file decompression: one month.
Simulation time decompression: two months.
Dependency simulator: three months.
Parameter value decompression: two months.
Waveform viewer: using Dinotrace; already done.
6 Conclusion
In this document, we presented an idea of using the CDFG to improve the developed waveform compression technique. Two cases of using the CDFG in waveform compression were developed: the simple case, which covers the signal and process dependencies, and the complex case, which covers the other VHDL statements. We can conclude that the advantage of the CDFG is that it allows a rapid interpretation of the information inside a VHDL model through a graphical representation.
7 References
[1] E. Naroska, A Novel Approach for Digital Waveform Compression.
[2] E. Naroska, Waveform Compression Technique.
[3] E. Naroska and S. Mchaalia, Control Data Flow Graph for the FreeHDL Compiler.
[4] J. Ziv and A. Lempel, A Universal Algorithm for Sequential Data Compression, IEEE Transactions on Information Theory, Vol. IT-23, No. 3, May 1977.
[5] The Multivariate Normal Distribution, basic course.
List of Figures
Figure 1: VCD File.
Figure 2: VHDL Model 1.
Figure 3: CDFG of VHDL Model 1.
Figure 4: VHDL Model 2.
Figure 5: VHDL Signal Assignment Procedure.
Figure 6: CDFG of VHDL Model 2.
Figure 7: VHDL Model 3.
Figure 8: CDFG of VHDL Model 3.
Figure 9: VHDL Model 4.
Figure 10: CDFG of VHDL Model 4.
Figure 11: Complexity of the Dependency Functions as a Function of the Signal Number.
Figure 12: Considered Space.
Figure 13: Variation of the Chance of Getting an Optimal Solution.
Figure 14: VHDL Model 5.
Figure 15: Software Design.
Figure 16: Software Module Designs.