Abstract
This paper presents a top-down design methodology for an artificial neural network
(ANN) based on a parametric VHDL description of the network. Early in the design
process, a highly regular architecture was derived. Then, the parametric VHDL
description of the network was written. This description has the advantage of being
generic and flexible, and it can easily be changed on demand. To validate our approach,
an ANN for electrocardiogram (ECG) arrhythmia classification is passed through a
synthesis tool, GALILEO, for FPGA implementation.
Key words
ANN, top-down design, VHDL, parametric description, FPGA implementation.
Introduction
Engineers have long been fascinated by how efficiently and how fast biological neural
networks are capable of performing complex tasks such as recognition. Such networks are
capable of recognizing input data from any of the five senses with the accuracy and
speed necessary for living creatures to survive. Machines that perform such complex
tasks with similar accuracy and speed were difficult to implement until the
technological advances of VLSI circuits and systems in the late 1980s [1]. Since then,
VLSI implementation of artificial neural networks (ANNs) has witnessed exponential
growth. Today, ANNs are available as microelectronics components.
The benefit of using such an implementation is well described in a paper by R. Lippmann [2]:
"The great interest of building neural networks remains in the high-speed processing that
could be provided through massively parallel implementation". In [3], P. Trealeven and
others have also reported that the important design issues of VLSI ANNs are parallelism,
performance, flexibility and their relationship to silicon area. To cope with these
properties, [3] reported that a good VLSI ANN should exhibit the following architectural
properties:
• Design simplicity, leading to an architecture based on copies of a few simple cells.
• Regularity of the structure, which reduces wiring.
• Expandability and design scalability, which allow many identical units by packing a
number of processing units on a chip and interconnecting many chips for a complete
system.
Historically, the development of VLSI implementations of artificial neural networks has
been widely influenced by developments in technology as well as in VLSI CAD tools.
Fig. 1. (a) Biological neuron model. (b) Artificial neuron model. (c) Three-layer artificial neural network.
The ANN computation can be divided into two phases: a learning phase and a recall phase.
The learning phase performs an iterative updating of the synaptic weights based upon the
error back-propagation algorithm [2]; it teaches the ANN to produce the desired output for
a set of input patterns. The recall phase computes the activation values of the neurons up
to the output layer according to the weight values computed in the learning phase.
Mathematically, the function of the processing elements can be expressed as:

    s_j^l = f( Σ_i w_ij^l · s_i^(l-1) + θ )        (1)

where w_ij^l is the real-valued synaptic weight between element i in layer l-1 and
element j in layer l, s_i^(l-1) is the current state of element i in layer l-1, and θ is
the bias value. The current state s_j^l is obtained by applying the activation function f
to this weighted sum.
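As a small numerical illustration of the neuron computation in Eq. (1) (the input states,
weights and bias below are invented for illustration only), take a neuron with three
inputs s = (0.5, 1.0, 0.2), weights w = (0.4, -0.3, 0.8) and bias θ = 0.1:

```latex
% Illustrative evaluation of Eq. (1); all numeric values are invented.
s_j^l = f\big(0.4 \cdot 0.5 + (-0.3) \cdot 1.0 + 0.8 \cdot 0.2 + 0.1\big)
      = f(0.16) \approx 0.54,
\qquad \text{with } f(x) = \frac{1}{1 + e^{-x}}.
```

With the usual sigmoid activation, the weighted sum 0.16 maps to an output of about 0.54.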
It must be mentioned that our aim is to implement the recall phase of a neural network
that has been previously trained on a standard digital computer, where the final synaptic
weights are obtained, i.e. "off-chip training".
III. Design methodology
The proposed approach for the ANN implementation follows a top-down design
methodology. As illustrated in Fig. 2, an architecture is first fixed for the ANN. This
phase is followed by the VHDL description of the network at the register transfer level
(RTL) [8], [13]. This VHDL code is then passed through a synthesis tool, which performs
logic synthesis and optimization according to the target technology. The result is a
netlist ready for place and route using an automatic FPGA place-and-route tool. At this
level, verification is required before the final FPGA implementation.
In the following sections, the digital architecture of the ANN is derived, followed by
the proposed parametric VHDL description. Synthesis, placement and routing results are
discussed through an application.
Fig. 5(a) illustrates the VHDL description of the neuron, Fig. 5(b) the layer
description, and Fig. 5(c) the network description.
First, a VHDL description of the MAC circuit and of the ROM and LUT memories was
written. In order to achieve flexibility, the word size (nb_bits) and the memory depths
(nb_addr and nb_add) are kept as generic parameters (Fig. 5(a)).
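A possible shape for such a generic memory is sketched below. This is a hypothetical
illustration, not the paper's exact code: the entity name, port names and generics
mirror the LUT component of Fig. 5(a), while the table-based architecture body and the
sigmoid_table signal are our own assumptions (its initialization with pre-computed
activation samples is omitted).

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical sketch of an activation-function LUT whose depth and
-- word size are set by generics, in the spirit of Fig. 5(a).
entity LUT is
  generic (nb_addr : integer := 8;    -- address width (memory depth = 2**nb_addr)
           nb_bits : integer := 8);   -- output word size
  port (addr    : in  unsigned(nb_addr-1 downto 0);
        read_en : in  std_logic;
        out_lut : out std_logic_vector(nb_bits-1 downto 0));
end LUT;

architecture sketch of LUT is
  type rom_t is array (0 to 2**nb_addr - 1) of
    std_logic_vector(nb_bits-1 downto 0);
  -- sigmoid_table would hold pre-computed activation samples;
  -- the initialization is omitted in this sketch.
  signal sigmoid_table : rom_t := (others => (others => '0'));
begin
  -- Asynchronous read gated by read_en.
  out_lut <= sigmoid_table(to_integer(addr)) when read_en = '1'
             else (others => '0');
end sketch;
```

Because nb_addr and nb_bits are generics, the same description can be re-synthesized for
a different precision without touching the architecture body.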
Second, a VHDL description of the neuron was achieved. The parameters that introduce
the flexibility of the neuron are the word size (nb_bits) and the component
instantiations. A designer can change the performance of the neuron by choosing other
pre-described components stored in a library, without changing the VHDL description of
the neuron (Fig. 5(a)).
Third, a layer is described. The parameters that introduce the design flexibility and
genericity of the layer are the word size (nb_bits) and the number of neurons
(nb_neuron). The designer can modify the number of neurons in a layer through simple
modifications of the layer's VHDL description (Fig. 5(b)).
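One way the nb_neuron generic could drive the layer structure is a for-generate loop
that replicates the neuron component. This is a hypothetical sketch, not the code of
Fig. 5 (which lists the instantiations explicitly); the entity name layer_gen and the
flattened layer_out bus are our own assumptions.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical sketch: a layer whose neuron count is set by the
-- nb_neuron generic through a for-generate statement.
entity layer_gen is
  generic (nb_neuron : integer := 3;
           nb_bits   : integer := 8);
  port (layer_in  : in  unsigned(nb_bits-1 downto 0);
        clk, rst, ready, read_en : in std_logic;
        -- one nb_bits-wide slice per neuron, concatenated
        layer_out : out std_logic_vector(nb_neuron*nb_bits-1 downto 0));
end layer_gen;

architecture sketch of layer_gen is
  component neuron
    generic (nb_bits : integer);
    port (in_neur  : in  unsigned(nb_bits-1 downto 0);
          out_neur : out std_logic_vector(nb_bits-1 downto 0);
          read_en, rst, clk, ready : in std_logic);
  end component;
begin
  -- Replicate the neuron nb_neuron times; each instance drives its
  -- own slice of the output bus.
  gen_neurons : for i in 0 to nb_neuron-1 generate
    neuron_i : neuron
      generic map (nb_bits => nb_bits)
      port map (in_neur  => layer_in,
                out_neur => layer_out((i+1)*nb_bits-1 downto i*nb_bits),
                read_en  => read_en, rst => rst,
                clk      => clk,     ready => ready);
  end generate;
end sketch;
```

With this style, changing the number of neurons in a layer reduces to changing a single
generic value at instantiation time.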
Finally, a VHDL description of the network is achieved. The parameters that introduce
the flexibility of the network are the neuron word size (n), the number of neurons in
each layer (nb_neuron) and the component instantiation of each layer (component layer1,
component layer2 and component layer3). The designer can easily modify the size of the
network with small changes in the layer descriptions, and can also change the
performance of the network simply by using other pre-designed layers (Fig. 5(c)).
(a)

entity neuron is
  generic (nb_bits : integer) ;  -- word size
  port (in_neur  : in  unsigned(nb_bits-1 downto 0) ;
        out_neur : out std_logic_vector(nb_bits-1 downto 0) ;
        read_en, rst, clk, ready : in std_logic) ;
end neuron ;

architecture neuron_description of neuron is
  component MAC
    generic (nb_bits : integer) ;
    port (x, w : in std_logic_vector(nb_bits-1 downto 0) ;
          clk, rst : in std_logic ;
          q : out std_logic_vector(2*nb_bits-1 downto 0)) ;
  end component ;
  component ROM
    generic (nb_addr : integer ; nb_bits : integer) ;
    port (add : in unsigned(nb_addr-1 downto 0) ;
          out_rom : out std_logic_vector(nb_bits-1 downto 0) ;
          read_en : in std_logic) ;
  end component ;
  component LUT
    generic (nb_addr : integer ; nb_bits : integer) ;
    port (addr : in std_logic_vector(nb_bits-1 downto 0) ;
          out_lut : out std_logic_vector(2*nb_bits-1 downto 0) ;
          read_en : in std_logic) ;
  end component ;
begin
  rom_weight : ROM generic map (), port map (read_en, add, w) ;
  mult_acc   : MAC generic map (), port map (x, w, clk, rst, q) ;
  result     : LUT generic map (), port map (read_en, q, out_lut) ;
end neuron_description ;

(b)

entity layer_n is
  generic (nb_neuron : integer ; nb_bits : integer) ;
  port (input_layer1 : in unsigned(nb_bits-1 downto 0) ;
        input_layer2 : in std_logic_vector(nb_bits downto 0) ;
        clk, rst, ready, read_en1 : in std_logic ;
        output_layer1, ..., output_layer_n : out
          std_logic_vector(2*nb_bits+1 downto 0)) ;
end layer_n ;

architecture layer_description of layer_n is
  component neuron
    port (in_neur  : in  std_logic_vector(nb_bits-1 downto 0) ;
          out_neur : out std_logic_vector(nb_bits-1 downto 0) ;
          read_en, rst, clk, ready : in std_logic) ;
  end component ;
begin
  neuron_n : neuron generic map (),
    port map (input_layer1, input_layer2, clk, rst, ready,
              read_en1, output_layer1, ..., output_layer_n) ;
end layer_description ;

(c)

entity network is
  generic (n, n1, n0 : integer) ;
  port (X1, X2, X3, X4, X5 : in std_logic_vector(n downto 0) ;
        ad, ad1, ad2 : in unsigned(n1 downto 0) ;
        clk, rst, ready1, read_en : in std_logic ;
        n132, n232 : out std_logic_vector(2*n+1 downto 0)) ;
end network ;

architecture network_description of network is
  component layer1
    generic (nb_neuron : integer ; n1 : integer) ;
    port (X1, X2, X3, X4, X5 : in std_logic_vector(n1 downto 0) ;
          ad : in unsigned(n1 downto 0) ;
          s1 : in std_logic_vector(n0 downto 0) ;
          clk, rst, ready, read_en : in std_logic ;
          n13, n23, n33, n43, n53 : out
            std_logic_vector(2*n1+1 downto 0)) ;
  end component ;
  component layer2
    generic (nb_neuron : integer ; n1 : integer) ;
    port (X1, X2, X3, X4, X5 : in std_logic_vector(n1 downto 0) ;
          ad1 : in unsigned(n1 downto 0) ;
          s2 : in std_logic_vector(n0 downto 0) ;
          clk, rst, ready, read_en : in std_logic ;
          n13, n23, n33 : out std_logic_vector(2*n1+1 downto 0)) ;
  end component ;
  component layer3
    generic (nb_neuron : integer ; n1 : integer) ;
    port (X1, X2, X3 : in std_logic_vector(n1 downto 0) ;
          ad2 : in unsigned(n1 downto 0) ;
          s3 : in std_logic_vector(n0 downto 0) ;
          clk, rst, ready, read_en : in std_logic ;
          n132, n232 : out std_logic_vector(2*n1+1 downto 0)) ;
  end component ;
begin
  layer_5 : layer1 generic map (),
    port map (s1, X1, X2, X3, X4, X5, rst, clk, ready, read_en,
              ad, n13, n23, n33, n43, n53) ;
  layer_3 : layer2 generic map (),
    port map (s2, X1, X2, X3, X4, X5, clk, rst, ready, read_en,
              ad1, n13, n23, n33) ;
  layer_2 : layer3 generic map (),
    port map (s3, X1, X2, X3, clk, rst, ready, read_en,
              ad2, n132, n232) ;
end network_description ;
Fig. 5. Parametric VHDL description. (a): Neuron description. (b): Layer description. (c): Network
description.
Fig. 7. (a) ANN input-output connections. (b) Functional simulation results of the (5-3-2) ANN.
Fig. 8. GALILEO synthesis results. Fig. 9. Top view of the ANN FPGA structure.
References
[1] M. I. Elmasry, "VLSI Artificial Neural Networks Engineering", Kluwer Academic
Publishers.
[2] Richard P. Lippmann, "An Introduction to Computing with Neural Nets", IEEE
ASSP Magazine, pp. 4-22, April 1987.
[3] Philip Trealeven, Marco Pacheco and Marley Vellasco, "VLSI Architectures for
Neural Networks", IEEE MICRO, pp. 8-27, December 1989.
[4] Y. Arima, K. Mashiko, K. Okada, "A Self-Learning Neural Network Chip with 125
Neurons and 10K Self-Organization Synapses", Symposium on VLSI Circuits, pp. 63-64,
IEEE, 1990.
[5] H. Ossoing, "Design and FPGA Implementation of Neural Networks", ICSPAT'96,
pp. 939-943.
[6] Charles E. Cox and W. Ekkehard Blanz, "GANGLION - A Fast Field Programmable
Gate Array Implementation of a Connectionist Classifier", IEEE JSSC, Vol. 27, No. 3,
pp. 288-299, March 1992.
[7] R. Airiau, J. M. Berge, V. Olive, J. Rouillard, "VHDL du langage a la modelisation",
Presses Polytechniques et Universitaires Romandes et CNET-ENST.
[8] R. Airiau, J. M. Berge, V. Olive, "Circuit Synthesis with VHDL", Kluwer Academic
Publishers.
[9] Daniel Gajski, Nikil Dutt, Allen Wu, Steve Lin, "High-Level Synthesis: Introduction
to Chip and System Design", Kluwer Academic Publishers.
[10] XACT User Manual.
[11] M. S. Ben Romdhane, V. K. Madisetti and J. W. Hines, "Quick-Turnaround ASIC
Design in VHDL: Core-Based Behavioral Synthesis", Kluwer Academic Publishers.
[12] N. Izeboudjen and A. Farah, "A New Neural Network System for Arrhythmia
Classification", NC'98, International ICSC/IFAC Symposium on Neural Computation,
Vienna, September 23-25, pp. 208-212.
[13] GALILEO HDL Synthesis Manual.