C_ki = Σ_{i,j} δ_j · w_i · C_ij        (31)
where k, i = 0...n, n is the maximal number of signals, f(., .) is a function to define, and δ_j is defined as follows:

δ_j = { 0 : S_j value not stored in DDB
        1 : otherwise                        (32)
w_i is the weight of the operation i contained inside the dependency function f_i(). It is defined as follows:

w_i = { 0 : S_i independent
        1 : f_i() = NOT
        2 : f_i() = XOR
        ... : and so on                      (33)
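To make these definitions concrete, the following Python sketch evaluates equations (31)-(33) for a small example. The operation-weight table beyond NOT and XOR is an assumed extension of the "... : and so on" clause, and all function names are illustrative only:

```python
# Sketch of the flag delta_j and weight w_i definitions (equations 31-33).
# Weights for AND/OR are assumptions extrapolated from equation (33).
OP_WEIGHTS = {"INDEPENDENT": 0, "NOT": 1, "XOR": 2, "AND": 3, "OR": 3}

def delta(stored_in_ddb: bool) -> int:
    """delta_j = 0 if the signal value is not stored in the DDB, else 1."""
    return 1 if stored_in_ddb else 0

def complexity(ops, stored_flags, costs):
    """C_ki = sum of delta_j * w_i * C_ij over the operations (equation 31)."""
    return sum(delta(s) * OP_WEIGHTS[op] * c
               for op, s, c in zip(ops, stored_flags, costs))

print(complexity(["NOT", "XOR"], [True, True], [1.0, 1.0]))  # 1*1*1 + 1*2*1 = 3.0
```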
Let us consider the signal set S = {S_0, ..., S_n} as the information model to be identified. Our VHDL model can be seen as a source of signals, each produced at simulation time t_i with a distinguished probability p_i. These signals depend on one another and are mutually correlated, and each signal S_i represents a piece of information that may or may not be stored. Thereby, our problem can be considered as a lossless data compression problem: encode each signal S_i with a number of bits, optimize the number of signal values that must be stored for further use, and still allow a suitable restoration of all signals during the decompression process.
On the other hand, for a given signal set S = {S_0, ..., S_n}, we consider the probability p_i of a signal S_i, i = 0...n, to appear in the VCD file during a simulation process, and the probability p_{i/j}, i ≠ j and i, j = 0...n, of the dependency between the signals S_i and S_j. The probability p_i of signal S_i is defined as follows:
p_i = (λ_i · SS_i) / (Σ_{j=0}^{n} λ_j · SS_j)        (34)
where λ_i is the occurrence frequency of the signal S_i and SS_i is the storage space needed for storing the signal S_i. That means, when a signal S_i appears just once during the simulation process, λ_i will have the value one, and so on. Moreover, the probability p_{i/j} of the dependency between the signals S_i and S_j (or the conditional probability) is defined as follows:
p_{i/j} = (μ_j · SS_j) / (λ_i · SS_i) = (μ_j · SS_j) / (p_i · Σ_{l=0}^{n} λ_l · SS_l)        (35)
where μ_j is the number of times the signal S_j belongs to the signal S_i, i.e. the number of times the signal S_j appears as a parameter of the function f_i().
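As an illustration, equations (34) and (35) can be sketched in Python; the helper names and the symbols λ (occurrence frequency) and μ (dependency count) follow the reconstruction above and are not part of the described tool:

```python
# Sketch of the signal probabilities of equations (34)-(35).
# freq[i]  ~ lambda_i : occurrence frequency of signal S_i
# space[i] ~ SS_i     : storage space needed to store S_i
def p(i, freq, space):
    """p_i = lambda_i*SS_i / sum_j lambda_j*SS_j (equation 34)."""
    total = sum(f * s for f, s in zip(freq, space))
    return freq[i] * space[i] / total

def p_cond(i, j, mu_j, freq, space):
    """p_{i/j} = mu_j*SS_j / (lambda_i*SS_i), the dependency probability (eq. 35)."""
    return mu_j * space[j] / (freq[i] * space[i])

freq, space = [2, 1, 1], [4, 4, 8]
print(p(0, freq, space))             # 8/20 = 0.4
print(p_cond(0, 1, 1, freq, space))  # 1*4/(2*4) = 0.5
```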
The problem we are dealing with can be defined as an optimization problem. Mathematically, this problem is written as follows:

OP: { min_m ( (1/m) Σ_{i,j} C_ki )
      min_m ( (1/m) Σ_i p_i )                (36)
From this optimization problem, we can say that the needed storage space and the dependency function complexity are inversely proportional. So, we can define a new space composed of the complexity and the probability, as shown in figure 12. This allows us to consider the function to minimize as follows:
min_k ( (1/k) Σ_i d_im² )        (37)
where d_im is the distance between the points X_min(p_m, C_km) and X_i(p_i, C_ki):
d_im = d(X_i, X_m) = sqrt( ((p_i − p_m)/A)² + ((C_ki − C_km)/B)² )        (38)
where A and B are scaling constants, and the distance d_m is defined as follows:
d_m = min_i(d_i),  d_i = sqrt( (p_i/A)² + (C_ki/B)² )        (39)
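The scaled distances of equations (38)-(39) can be sketched as follows; the points are (probability, complexity) pairs, and the default values of the scaling constants A and B are illustrative:

```python
import math

# Sketch of the scaled distances of equations (38)-(39).
def d_im(p_i, c_ki, p_m, c_km, A=1.0, B=1.0):
    """Distance between X_i(p_i, C_ki) and X_min(p_m, C_km) (equation 38)."""
    return math.sqrt(((p_i - p_m) / A) ** 2 + ((c_ki - c_km) / B) ** 2)

def d_m(points, A=1.0, B=1.0):
    """d_m = min_i d_i, with d_i measured from the origin (equation 39)."""
    return min(math.sqrt((p / A) ** 2 + (c / B) ** 2) for p, c in points)

print(d_im(0.4, 3.0, 0.4, 7.0))       # 4.0
print(d_m([(3.0, 4.0), (6.0, 8.0)]))  # 5.0
```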
The point X_min(p_m, C_km) is chosen in such a manner that S_m = f_m(S_j), j ∈ {0...n}, is composed of independent signals only. This means that all the S_j are independent.
So, the problem we are dealing with can be written as follows:
min_l ( (1/l) Σ_i [ (p_i − p_m)² + (C_ki − C_km)² ] )        (40)
This problem can be transformed into a quadratic optimization problem of the following form:

∀k ∈ {i...h}:  X_k = [ p_i, ..., p_k, C_ik, ..., C_kk ]^T

min_k ( (1/k) ( X_k^T M_k X_k + V_k X_k + K_k ) )        (41)
where M_k is the matrix of the quadratic optimization:

M_k = [ (1/A²)·I_k       0
             0       (1/B²)·I_k ]

with I_k the k × k identity matrix. This matrix has dimension (2k, 2k), i.e. M_k ∈ R^(2k×2k). We can demonstrate that the matrix M_k is symmetric and positive definite, which means M_k^T = M_k and ∀x ∈ R^(2k×1): x^T M_k x > 0.
V_k = −2 [ p_m/A², ..., p_m/A², C_km/B², ..., C_km/B² ]^T

is the vector of the quadratic optimization. The length of this vector is 2k.
K_k = [ p_m²/A², ..., p_m²/A², C_km²/B², ..., C_km²/B² ]^T

is a constant vector. The length of this vector is 2k.

Figure 12: Considered Space

Figure 13: Variation of the chance of getting an optimal solution
The solution of the considered problem consists in finding a set of signals X_m = {S_i}_{i ∈ {j...m}}, where j < m < n, which satisfies the condition below. Our objective is to define the convenient signal set that yields an optimal storage space associated with an optimal complexity.
So, our new problem is formulated as follows:

∀m ∈ {0...k}, find X_m = {S_i}_{i ∈ {j...m}},  j < m < n        (42)

such that:

∀m, X_m = {S_i}_{i ∈ {0...m}}:
{ min_m ( (1/m) Σ_i C_ki ) = ( (1/m) Σ_i C_ki )_opt
  min_m ( (1/m) Σ_i p(S_i) ) = ( (1/m) Σ_i p_i )_opt        (43)
To resolve this problem, we consider the optimal signal set X_opt, which has a length (number of signals inside) n_opt such that n_opt < n, where n is the total number of signals inside the considered VCD file.
To search for X_{n_opt}, we need to consider a discrete probability problem, which is defined as follows:
For each m ∈ {0...k} we have a chance probability to get a solution of our OP problem. This solution can be written as ( (1/m) Σ C_ik , (1/m) Σ p_i ). Let us associate to this solution a chance probability q_m that satisfies 0 < q_m < 1.
We know that for n_opt we have the best chance to get the optimal solution ( (1/n_opt) Σ C_ik , (1/n_opt) Σ p_i ) of our OP problem. So, ∀m ∈ {0...k} the following condition must be verified: 1 > q_{n_opt} ≥ q_m > 0. Figure 13 shows the variation of the chance to get an optimal solution as a function of the considered signal sets.
We can assume that the variation of the chance of getting an optimal solution follows a distribution. So, we can define this distribution, which identifies the relationship between the chance probability of getting an optimal solution and the signal sets. This distribution can be written as follows:
q(X_i) = α · exp( −β · (X_i − X_{n_opt})^T (X_i − X_{n_opt}) ) + γ        (44)

where α, β and γ are constants.
We note that the peak of the chance is found at the point X_{n_opt}. However, we cannot demonstrate whether or not the considered distribution or density is symmetric around X_{n_opt}.
If we assume that this density is symmetric around X_{n_opt}, we can demonstrate that it can be written as follows:
q(X_i) = f_{X_{n_opt}, M_k}(X_i) = ( 1 / ( (2π)^k · sqrt(det(M_k)) ) ) · exp( −(X_i − X_{n_opt})^T M_k^{-1} (X_i − X_{n_opt}) / 2 )        (45)
Proof: to prove the equation above, just consider the case of the p-variate normal distribution with p = 2k [5].
So, we can use this equation as a constraint for the optimization problem defined in equation (41). We can always fix the chance of obtaining an optimal solution of the considered optimization problem, and then search for the corresponding signal set that achieves the considered chance probability.
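The density of equation (45) is cheap to evaluate for a diagonal M_k, whose determinant and inverse are trivial; the following sketch (illustrative helper names, diagonal entries assumed) confirms that the chance peaks at X_{n_opt}:

```python
import math

# Sketch of the chance density of equation (45) for a diagonal M_k,
# given as the list of its diagonal entries; the dimension is 2k.
def chance(x, x_opt, diag):
    """q(X) = exp(-(X-Xopt)^T M^-1 (X-Xopt)/2) / ((2*pi)^k * sqrt(det M))."""
    k = len(diag) // 2
    det = math.prod(diag)
    quad = sum((xi - oi) ** 2 / d for xi, oi, d in zip(x, x_opt, diag))
    return math.exp(-quad / 2.0) / ((2 * math.pi) ** k * math.sqrt(det))

x_opt = [0.5, 0.5, 1.0, 1.0]
diag = [0.25, 0.25, 0.0625, 0.0625]
# The density is maximal at x_opt and decays away from it.
print(chance(x_opt, x_opt, diag) > chance([0.6, 0.5, 1.0, 1.0], x_opt, diag))  # True
```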
For example, if we would like the chance of getting an optimal solution to be greater than 0.7, we can consider the following quadratic optimization problem:
{ min_k ( (1/k) ( X_k^T M_k X_k + V_k X_k + K_k ) )
  ( 1 / ( (2π)^k · sqrt(det(M_k)) ) ) · exp( −(X_k − X_{n_opt})^T M_k^{-1} (X_k − X_{n_opt}) / 2 ) ≥ 0.7        (46)
where k represents the length of the vector X_k = [ S_i, ..., S_k ]^T that must be identified.
4.2.2 One signal restoration
In this section we treat the case of restoring one signal from a signal list that has been compressed. We know that the considered signal is a combination of a signal set. So, to restore this signal we need to calculate the function linking this signal to all other signals. Indeed, the complexity of the restoration depends on the calculation of these signals, which can be very complex. To reduce this complexity, we need to reduce the complexities of the calculation of all signals that depend on each other. This means that the restoration depends on the number of operations needed to calculate all required signals.
S_n = F_n(S_0, ..., S_{n−1})        (47)

where

S_i = F_i(S_0, ..., S_k),  ∀i, k = 0...n−1        (48)
So, to calculate F_n() we first need to calculate the F_i(). Thus, the complexity of the function F_n() is calculated based on the complexities of the functions F_i():
C_n = Σ_{i=0}^{n} w_i · C_i        (49)
where w_i represents the weight of the operation belonging to the signal S_i. This problem can be transformed into an optimization problem:
min_n ( Σ_{i=0}^{n} w_i · C_i )        (50)
4.2.3 Database optimization
In this section we give an overview of the case where the same signal depends on itself at each clock cycle and/or on a signal which depends on the clock cycle. Despite the simplicity of simulating the relationship between the considered signal and the other associated signals, we have to solve the problem of storage space optimization.
Let us look at the example of figure 14. We see that the signal a depends on the clock signal: it changes value at each clock signal cycle.
In the example of figure 14, the signal a changes value at each clock signal cycle. So, if we want to store the signal a, we need more storage space. Mathematically, we can write the following relation between the clock signal clk and the signal a:

a^(n) = a^(n−1) XOR 1        (51)

where n is the n-th clock signal cycle.
P: process(clk)
begin
  if rising_edge(clk) then
    a <= a xor '1';
  end if;
end process;

Figure 14: VHDL Model 5
Notice that this equation allows us to restore the signal a at clock signal cycle n based on the value of this signal at clock signal cycle (n−1). So, we can store only the value of this signal at the first clock signal cycle and then calculate the value of the considered signal recursively. The problem, however, is that this calculation of the value of signal a at clock signal cycle n is very complex for large n and needs a long schedule time.
To solve this trade-off, we need to find a solution that reduces both the schedule time needed to restore the signal a at clock signal cycle n, even when n is very large, and the storage space needed to store the signal a at each clock signal cycle.
Problem formulation: The problem we are going to consider may be formulated as follows:

∀n ∈ N:  S^(n) = f(S^(n−1))        (52)

find

n_opt = { min_n ( (n − 1) · SS(a) )
          min_n ( (n − 1) · C_0 )        (53)
where SS() is the storage space needed to store a signal value. This can be transformed as follows:

∃p ∈ N:  n = i · p  where 1 ≤ i ≤ p        (54)

such that

∀n, p ∈ N:  S^(n) = f^(n−p)(S^(p))        (55)
For each clock signal cycle n, we will try to find an integer p that allows storing the signal value S^(i·p) and minimizes the schedule time needed for the restoration of the signal value S^(n). Arbitrarily, we can choose p as follows:

∃p ∈ N:  p − 1 ≤ n/10 ≤ p,  i = 1...9        (56)

Figure 15: Software Design
Schedule time analyses: In this paragraph, we justify our choice of the value of p. Let us consider the case when all values of the signal a are stored: we then need (n − 1) · size(a) storage space to store all values of signal a. However, if we store just the value of a at clock signal cycle 0, we will need (n − 1) · C_0 schedule time to restore the value of signal a at clock signal cycle n. Indeed, if we apply the developed algorithm, we need just p · C_0 ≈ (n/10) · C_0 schedule time to restore the value of signal a at clock signal cycle n, and just 9 · size(a) storage space. So, we reduce the schedule time by a factor of about 10, and the storage space is reduced as well.
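The checkpointing scheme above can be sketched as follows; the helper names are hypothetical, the checkpoint interval p is fixed as in equation (56), and the replayed recursion is the XOR relation of equation (51):

```python
# Sketch of the checkpointing scheme of equations (51)-(56): store the value
# of 'a' every p-th clock cycle, then restore a^(n) by replaying the
# recursion a^(i) = a^(i-1) XOR 1 from the nearest stored checkpoint.
def make_checkpoints(a0, n, p):
    """Store a^(0), a^(p), a^(2p), ... up to cycle n."""
    ckpts, a = {0: a0}, a0
    for cycle in range(1, n + 1):
        a ^= 1
        if cycle % p == 0:
            ckpts[cycle] = a
    return ckpts

def restore(ckpts, n, p):
    """Replay at most p-1 XOR steps instead of n steps."""
    base = (n // p) * p
    a, steps = ckpts[base], n - base
    for _ in range(steps):
        a ^= 1
    return a, steps

ckpts = make_checkpoints(0, 100, 10)
value, steps = restore(ckpts, 97, 10)
print(value, steps)  # 1 7  (a toggles each cycle, so a^(97) = 1; 7 replay steps)
```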
4.3 Waveform Compression Software Design
In this section an overview of the developed software design is presented. Figure 15 represents the general aspect of this software.

- FreeHDL Simulator: its task is to simulate a given VHDL design in order to create the VCD and DDB files for further use.
- CDFG Simulator: the CDFG simulator creates the Control Data Flow Graph of the considered VHDL model and writes it in a specific file.
- Waveform Compressor: its task is the compression of the VCD file based on the CDFG of the considered model. The obtained file is stored for further use.
- Waveform Decompressor: realizes the decompression of the compressed VCD file for viewing.
- Waveform Viewer: allows viewing of the decompressed VCD file in order to extract the verification results of the considered VHDL design.
Figure 16 represents the software module designs in detail.

- Waveform Compressor: the Waveform Compressor is composed of two basic modules:
  - Waveform Compression module: this module deals with the compression techniques. Three compression modules are employed:
    - Header file compression module: as described above, the compression of the VCD header file is done independently of the Control Data Flow Graph.
    - Simulation Time Compression module: this module deals with the compression of the signal IDs based on knowledge of the considered clock cycle and signal events. It returns the signal IDs which must be compressed and stored.
    - Parameter Value Compression module: this module realizes the compression of the considered parameter values inside the given VCD file based on the information returned from the Dependency Scheduler and the VCD file. It returns the signal values which must be compressed and stored.
  - Dependency Scheduler module: this module treats the dependency of signals and processes based on the developed CDFG of the VHDL design. This module controls whether a given signal ID and value will be compressed and stored or not. So, it controls the Parameter Value Compression and the Simulation Time Compression modules. Thereby, it tells the Simulation Time Compression module which signal IDs must be compressed, and the Parameter Value Compression module which signal and variable values must be compressed and stored. On the other hand, it returns the dependencies between signals and variables in mathematical form to the Waveform Compression module to be stored. Its input comes from the CDFG Scanner and Parser.
- CDFG Scanner and Parser module: this module translates the CDFG, which is written in a given file, into a database inside the memory for further use. Its inputs come from the DDB and CDFG files.
- Viewer: the viewer has to extract the verification results of the considered VHDL model. Basically it consists of two modules:
  - Waveform Decompressor: it is composed of four modules, which are needed for the decompression of the compressed VCD file.
    - Header File Decompression module: this module is used to decompress the VCD header file.
    - Simulation Time Decompression module: this module allows the decompression of the simulation time based on the information returned from the compressed VCD file and the Dependency Simulator.
    - Parameter Value Decompression module: this module decompresses all parameter values based on the information stored in the compressed VCD file and that returned from the Dependency Simulator.
    - Dependency Simulator module: this module simulates the stored dependencies between parameters and returns either the parameter value or the simulation time.
  - Waveform Viewer: simulates the decompressed VCD file to extract the verification results of the VHDL model.

Figure 16: Software Module Designs
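The store-or-derive decision that the Dependency Scheduler performs could be sketched as follows; the data structures and function names are hypothetical, not the actual module interface:

```python
# Hypothetical sketch of the Dependency Scheduler decision: a signal value is
# stored only when it cannot be derived from already-stored signals via the CDFG.
def must_store(signal, cdfg_deps, stored):
    """Store 'signal' unless all of its CDFG dependencies are already stored."""
    deps = cdfg_deps.get(signal, [])
    return not deps or not all(d in stored for d in deps)

# 'c' is a combination of 'a' and 'b'; 'a' and 'b' are independent.
cdfg = {"c": ["a", "b"], "a": [], "b": []}
stored = set()
for s in ["a", "b", "c"]:
    if must_store(s, cdfg, stored):
        stored.add(s)
print(sorted(stored))  # ['a', 'b'] -- 'c' is derived from a and b at decompression
```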
5 Time Schedule
In this section we present the time needed to achieve each module of this software.

- VCD header file compression: already done.
- Simulation time compression: one month.
- Parameter value compression: one month.
- Dependency scheduler: two months.
- CDFG scanner and parser: already done.
- VCD header file decompression: one month.
- Simulation time decompression: two months.
- Dependency simulator: three months.
- Parameter value decompression: two months.
- Waveform viewer: using Dinotrace; already done.
6 Conclusion
In this document, we presented an idea for using the CDFG to improve the developed waveform compression technique. Two cases for using the CDFG in waveform compression were developed: a simple case, which deals with signal and process dependencies, and a complex case, which deals with the other VHDL statements. We can conclude that the CDFG's advantage is to allow a rapid interpretation of the information inside a VHDL model through a graphical representation.
7 References
[1] E. Naroska, A Novel Approach for Digital Waveform Compression.
[2] E. Naroska, Waveform Compression Technique.
[3] E. Naroska and S. Mchaalia, Control Data Flow Graph for FreeHDL Compiler.
[4] J. Ziv and A. Lempel, A Universal Algorithm for Sequential Data Compression, IEEE Transactions on Information Theory, Vol. IT-23, No. 3, May 1977.
[5] The Multivariate Normal Distribution, basic course.
List of Figures
Figure ??: VCD File.
Figure ??: VHDL Model 1.
Figure ??: CDFG of VHDL Model 1.
Figure ??: VHDL Model 2.
Figure ??: VHDL Signal Assignment Procedure.
Figure ??: CDFG of VHDL Model 2.
Figure ??: VHDL Model 3.
Figure ??: CDFG of VHDL Model 3.
Figure ??: VHDL Model 4.
Figure ??: CDFG of VHDL Model 4.
Figure ??: Software Design.
Figure ??: Software Module Designs.