
DESIGN OF LOW VOLTAGE LOW POWER

NEUROMORPHIC CIRCUITS USING


CADENCE TOOLS
A Dissertation
Submitted in Partial Fulfillment of the
Requirements for the Degree of
Master of Technology
in
Electronics Engineering
By
Syed Muassir Mir Sadat Ali
(Reg.No.2010MEC018)
Under the guidance of
Dr. S. S. Gajre
Department of Electronics and Communication Engineering
Shri Guru Gobind Singhji Institute of Engineering
& Technology, Nanded (M.S), INDIA.
July-2012
Dedicated to
My
Father and Mother
- Who are the inspiration and the pillars behind the success of this work
Declaration
I hereby declare that the dissertation entitled DESIGN OF LOW VOLTAGE
LOW POWER NEUROMORPHIC CIRCUITS USING CADENCE TOOLS, which
is being submitted for the award of Master of Technology in Electronics Engineering
to the Swami Ramanand Teerth Marathwada University, Nanded, is my original and
independent work.
Syed Muassir Mir Sadat Ali
Reg.No.: 2010MEC018
Department of Electronics & Telecomm.
S.G.G.S. I.E & T, Nanded
Acknowledgement
I would first like to thank my guide Dr. S. S. Gajre for his continuous support and
guidance throughout this dissertation; without his efforts this work would not be where
it is today. I benefited from his advice, encouragement, innovative ideas and inspiration.
It is he who ignited the fire within me for the MOS device and VLSI technology. Words
fall short to thank Dr. S. S. Gajre for making me what I am today, both technically
and personally.
I take this opportunity to thank our Ex-Director Late Dr. S. R. Kajale, Dr. R. R.
Manthalkar and Dr. S. S. Gajre for setting up the CADENCE DESIGN CENTRE
(under the MODROB scheme funded by AICTE to upgrade the VLSI lab) at the
institute by procuring the VLSI industry standard CADENCE software. I would also
like to thank Dr. A. V. Nandedkar for introducing me to Artificial Neural Networks.
I had many useful technical discussions with my friends Rohit S. of Qualcomm,
Shoeb M. Khan of Hyundai Ltd and Shadab Khan of IBM Ltd. Thank you, friends. I
asked some really weird, technically simple as well as knotty questions on the edaboard
forum. All were answered diligently and patiently by Mr. Erik Lindner (formerly with
ATMEL). Thanks a lot, Erik Sir. Similar thanks go to Mr. Keith Raper of Key Design
Electronics Ltd, UK for answering my questions on the edaboard forum.
I am thankful to Nakul, Anuj, Bharat and Srikant for their support and valuable
discussions on the implementation of this project work. My other M.Tech colleagues
have created a friendly atmosphere and helped me directly or indirectly in many
ways. Thanks, guys! None of this would have been possible without the constant
support of my family: my father M. S. Ali and mother Farzana, my sister S. Shoyeba,
and my brothers Dr. S. Muddassir, Dr. S. Mukkassir, Quazi Mushtaquddin, Er. S. Mubbassir
and Er. S. Munashir. I thank all of them for their moral support. I am indebted and
thankful to my sister for her special gift, received with lots of love, which helped me a
lot in this project.
Date: July 22nd, 2012
Place: Nanded, Maharashtra, India.
Syed Muassir M. S. Ali
Abstract
We all need a system that will learn by itself and behave accordingly. This is
the motivation behind this project. In this dissertation work we have designed
the basic building blocks required for Neuromorphic Systems: systems that
emulate and mimic the behavior of biological neural systems, and which may in turn
someday bring a new series of Neurocomputers, computers that do not follow a set
of instructions but rather learn over a period of time, similar to the human neural
system, in short like the human brain. These systems will need parallel processing of
signals and should work at low power.
In this dissertation work, we have designed a circuit for the Neuron which works
at very low voltage and low power. It requires a small number of transistors, which
operate in the subthreshold mode and use the translinear principle. The circuit
for the neuron has been realized using a Four Quadrant Multiplier for the Synapse
and a Differential Transconductance Amplifier for the Activation Function stage of the
Neuron. Another important building block, the Winner-Take-All (WTA) circuit, has
been designed in subthreshold MOS and also uses the translinear principle. This is a
novel implementation in 180nm technology for low-voltage operation in the subthreshold
mode. This circuit too requires few transistors, works at a very low voltage
of 0.7V and consumes little power. All these circuits have been designed in a 180nm
process technology in the CADENCE environment using the Spectre simulator.
Contents
List of Figures x
List of Tables xii
1 Introduction to Neuromorphic Engineering 1
1.1 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Biological Neural Networks . . . . . . . . . . . . . . . . . . . . 3
1.1.2 Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . 5
1.2 Development History . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 ANN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Neuromorphics . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Hardware Implementation of Neural Networks
(Neuromorphic Circuits) . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 Biology and Silicon Devices . . . . . . . . . . . . . . . . . . . 9
1.3.2 Digital Vs Analog VLSI . . . . . . . . . . . . . . . . . . . . . 10
1.3.3 Analog VLSI for Neuromorphic Circuits . . . . . . . . . . . . 11
1.4 Summary of the Dissertation: Chapter Outline . . . . . . . . . . . . . 12
2 Low Voltage Low Power Circuit Design 13
2.1 The General Framework . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Why analog? Why digital? . . . . . . . . . . . . . . . . . . . . 13
2.1.2 Why Low Voltage? . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.3 Why Low Power? . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.4 Why CMOS? . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Techniques to reduce the Power Consumption and the Voltage Supply 15
2.2.1 Techniques for Voltage Reduction . . . . . . . . . . . . . . . . 15
2.2.2 Techniques for Current Reduction . . . . . . . . . . . . . . . . 17
2.3 The MOS Transistor in Weak Inversion . . . . . . . . . . . . . . . . . 17
2.4 The Current Conveyor . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3 Translinear Circuits in Subthreshold MOS 24
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 The Translinear Elements . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 The Translinear Principle . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3.1 Translinear Loops of Ideal TE . . . . . . . . . . . . . . . . . . 27
3.3.2 Translinear Loops in Subthreshold MOS Transistors. . . . . . 30
3.4 Examples of Translinear Circuits . . . . . . . . . . . . . . . . . . . . . 32
4 Neuron Circuit Design 34
4.1 Some Biology ! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1.1 Overview of Neuron: . . . . . . . . . . . . . . . . . . . . . . . 35
4.1.2 Anatomy of Neuron . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1.3 Synapses for Connectivity . . . . . . . . . . . . . . . . . . . . 38
4.1.4 Mechanisms for Propagating Action Potentials . . . . . . . . . 39
4.2 Neuron Circuit Design . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2.1 Multiplier(Synapse) Circuit Design . . . . . . . . . . . . . . . 42
4.2.2 Implementation & Simulations of Synapse Design . . . . . . . 46
4.2.3 Activation Function Circuit Design . . . . . . . . . . . . . . . 50
4.2.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5 Design of WTA Circuit 53
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.2 Current Mode WTA Circuits . . . . . . . . . . . . . . . . . . . . . . . 55
5.2.1 Lazzaro's WTA Circuit Principle . . . . . . . . . . . . . . . . 55
5.2.2 Novel Implementation of CM WTA . . . . . . . . . . . . . . . 57
5.2.3 Simulation Results of WTA circuit . . . . . . . . . . . . . . . 59
6 Conclusions and Future Work 63
6.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
6.2.1 Array/Layers of Neurons . . . . . . . . . . . . . . . . . . . . . 64
6.2.2 Emulating Human Vision . . . . . . . . . . . . . . . . . . . . 64
6.2.3 Layout of the designed circuits . . . . . . . . . . . . . . . . . . 64
Bibliography 65
List of Figures
1.1 A drawing of major structures of the Brain . . . . . . . . . . . . . . . . 3
1.2 Block Diagram of a Neural System . . . . . . . . . . . . . . . . . . . . . 3
1.3 Comparison of Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1 Gate Voltage Vs Charge . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Representation of Capacitances . . . . . . . . . . . . . . . . . . . . . . 18
2.3 V_DS vs I_D characteristics for Subthreshold MOS . . . . . . . . . . . . . 20
2.4 V_GS vs I_D characteristics for Subthreshold MOS . . . . . . . . . . . . . 21
2.5 Current Conveyor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.6 Current Conveyor Implementations (a) Single MOS transistor Current Con-
veyor. (b) Two MOS transistors current controlled conveyor . . . . . . . 23
3.1 Translinear Elements.(a)Circuit symbol for ideal TE. (b)a diode (c)an npn
BJT (d)a subthreshold MOSFET . . . . . . . . . . . . . . . . . . . . . 26
3.2 A Conceptual Translinear Loop comprising N ideal TEs . . . . . . . . 28
3.3 A Translinear Loop of Subthreshold MOS Transistors with their bulks tied
to a common substrate potential. . . . . . . . . . . . . . . . . . . . . . 30
3.4 A Translinear Circuit Topology of Subthreshold MOS Transistors.(a)Stacked
Loop (b)Alternating Loop . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5 A Translinear Circuit using Subthreshold MOS Transistors and the output
equations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1 Typical Structure of Neuron . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 Information Flow through Neurons: A Signal propagating down an axon to
the cell body and dendrites of the next cell. . . . . . . . . . . . . . . . . 39
4.3 (a)Neuron Model, (b) Neuron Circuit . . . . . . . . . . . . . . . . . . . 41
4.4 Generic(alternate) Translinear Loop . . . . . . . . . . . . . . . . . . . . 43
4.5 Four Quadrant Multiplier Circuit Topology for the Synapse implementation 44
4.6 Measured DC Transfer Characteristics when w input is used as parameter. 47
4.7 Measured DC Transfer Characteristics when x input is used as parameter. 47
4.8 Overall Circuit Implementation of the Four Quadrant Multiplier for the
Synapse Circuit Design. . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.9 Transient analysis of the input waveforms for the x and w inputs. . . . . . 49
4.10 Output current waveform in the case of waveform modulation when two
different sinusoids are applied to the multiplier. . . . . . . . . . . . . . . 49
4.11 MOS Differential Pair . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.12 MOS Differential Transconductance Amplifier for the implementation of
Activation Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.13 DC Voltage Characteristics of the Differential Transconductance Amplifier
for the Activation Function . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.1 Current Mode WTA Neural network . . . . . . . . . . . . . . . . . . . . 55
5.2 Schematic Diagram of 3 cells of Lazzaro's WTA Circuit . . . . . . . . . 56
5.3 Schematic Diagram of the Current Mode WTA circuit in subthreshold mode
of operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.4 Transient Response of the subthreshold WTA circuit where I_out follows
the envelope of the input currents . . . . . . . . . . . . . . . . . . . . . .
5.5 DC Response of the subthreshold WTA circuit . . . . . . . . . . . . . . 61
List of Tables
1.1 Digital vs Analog VLSI Technology . . . . . . . . . . . . . . . . . . . 10
4.1 Transistor sizes of the 4-Quad Translinear Multiplier(Synapse) shown
in Fig.4.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.1 Dimensions of the Transistors,Supply Voltage and Currents used for
the Subthreshold Operation of the WTA circuit. . . . . . . . . . . . . 60
5.2 Performance Characteristics Comparison with other WTA circuits . . 62
Chapter 1
Introduction to Neuromorphic
Engineering
Neuromorphic Engineering is an interdisciplinary approach to the design of
information systems that are inspired by the function, structural organization, and physical
foundations of biological neural/nervous systems. The term Neuromorphic Engineering
was coined by Carver Mead [1]; it is also called Bio-Mimetic Engineering. Neuromorphic
Engineering aims at systems that attain intelligent behavior through adaptation
and learning in their interaction with the surrounding environment, and that
are more robust and orders of magnitude more energy efficient than conventional
approaches using digital electronics and user-programmed intelligent systems.
What is it? It is captured in the belief that
As engineers, we would be foolish to ignore the lessons of a billion years
of evolution - Carver Mead, 1993.
An interesting comparison between a biological system and a supercomputer is
given in the following. A frog finds a fly passing by and catches it in a moment.
This action is created by a series of information-processing steps carried out within
this small creature: the capture of the fly's image on the retina, identification of
the object as a meal, and computation of its expected motion, followed by the activation
of motor neurons to catch the fly. Such a real-time action, however, is impossible
even with the most advanced supercomputers of today. So it is very necessary to study
and understand the working of the neural network system.
1.1 Neural Networks
Neural Networks are a promising computational approach due to their high
capability in modeling and solving complex problems that are hardly approachable with
traditional methods. Neural networks are composed of massively parallel architectures
that process large quantities of information in analog values, and solve a variety
of ill-defined and/or computation-intensive signal processing tasks. They can be
successfully used when:
There is no direct algorithmic solution for the chosen problem, but desired
responses for a set of examples are available; these examples can be used to
train the neural network to solve the chosen problem (e.g. handwritten character
recognition).
The given problem changes over time; then the adaptability of a neural network
is exploited to adapt the problem solution whenever the problem changes (e.g.
control of dynamic and aging processes).
Neural networks are not useful for problems for which an analytical solution can
easily be found and implemented. In that case, the corresponding neural
implementation will generally be larger and less accurate than the algorithmic solution.
It is essential to recognize that neural systems evolved without the slightest
notion of mathematics or engineering analysis. Nature knew nothing of bits, Boolean
algebra, or linear system theory. But evolution had access to a vast array of physical
phenomena that implemented important functions. It is evident that the resulting
computational metaphor has a range of capabilities that exceeds by many orders of
magnitude the capabilities of the most powerful digital computers. The fast growing
field of neural network research attempts to learn from nature's success and to
mimic some of nature's tricks to accomplish information processing tasks (not
easily performed by conventional methods).
It is the explicit mission of this dissertation to explore the view of computation
that emerges when we use this evolutionary approach in developing an integrated
semiconductor technology to implement large scale collective analog computation
(maybe some Neuro-Computer of the future). Building blocks like the Neuron and the
WTA circuit have been designed and simulated.
1.1.1 Biological Neural Networks
Biological Neural Networks provide in fact the best source of knowledge to
researchers for developing and implementing powerful neural systems. A drawing of the
major structures of a brain is shown in Figure 1.1, which was originally shown in [2].
A similar plot can be found in [3]. According to the DARPA Neural Network Study
published in 1988 [4], one human brain is estimated to have 100 billion neurons.
Figure 1.1: A drawing of major structures of the Brain
The processing performed by a neural system can be divided into two stages:
the perception and the cognition stages (see Figure 1.2).
Figure 1.2: Block Diagram of a Neural System
At the perception stage, the input stimulus is acquired and pre-processed (i.e.
normalized, adapted, etc.), and then the salient features of the stimulus are extracted. At
the cognition stage, the results of the perception stage are processed to compute more
complex information and to generalize from the perceived stimulus (i.e. the relationships
underlying the features are constructed). The result of cognition is the action.
The neural system of Fig. 1.2 can be considered general purpose, i.e., different
tasks can be solved by the proposed architecture.
In a vision system for a robot, the perception stage can be implemented by
a contrast sensitive retina, which transduces the luminance of the input scene
into physical variables, performs contrast adaptation and extracts the salient
features from the acquired input scene (i.e. the perception stage is implemented
by an early vision system). The cognition stage processes these features and
computes the information needed to drive the robot inside an unknown environment
(e.g., edges, textures, etc.).
In a character recognition system, the perception stage implements the character
acquisition, segmentation, and normalization of the string of input characters;
then it extracts a set of features that describe the characteristics of the input
character (e.g., character shape, orientation, etc.). The cognition stage processes
these features and implements the classification task, i.e., it gives the results
of the recognition.
Human Brain Vs Computer
Now let's compare the Human Brain with Intel's Core 2 Duo Processor:
Core 2 Duo
1. Requires 65 Watts of Power
2. Has 291 million Transistors
3. Consumes > 200nW/Transistor
Human Brain
1. Requires 10 Watts of Power
2. Has 100 billion Neurons
3. Consumes 100pW/Neuron
The differences speak for themselves. What we need are building blocks that consume
less power and require less area, so that a huge number of neurons can be accommodated.
This is actually the goal of this dissertation.
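As a quick back-of-envelope check of the figures quoted above (the device counts and power numbers are exactly the ones listed; the Python below is used purely as a calculator):

```python
# Back-of-envelope check of the power-per-device figures quoted above.
core2duo_power_w = 65.0        # package power, W
core2duo_transistors = 291e6   # transistor count

brain_power_w = 10.0           # estimated brain power, W
brain_neurons = 100e9          # estimated neuron count

print(f"Core 2 Duo: {core2duo_power_w / core2duo_transistors * 1e9:.0f} nW per transistor")
print(f"Human brain: {brain_power_w / brain_neurons * 1e12:.0f} pW per neuron")
# -> roughly 223 nW per transistor versus 100 pW per neuron
```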
1.1.2 Artificial Neural Networks
Artificial Neural Networks (ANN) are composed of a large number of computational
nodes operating in parallel [5], [6]. Computational nodes, called neurons,
consist of processing elements with a certain number of inputs and a single output
that branches into collateral connections, leading to the inputs of other neurons.
Normally they perform a nonlinear function on the sum (or collection) of their inputs.
The neurons are highly interconnected via weight strengths [5], [6]: these
interconnections are typically called synapses and control the influence of neurons on
other neurons. The synaptic processing is typically modeled as a multiplication or a
Euclidean distance between a neuron's output and the synaptic weight strength. Each
neuron's output level therefore depends on the outputs of the connected neurons and
on the synaptic weight strengths of the connections.
The neuron topology and the weight strengths determine the ANN's behavior: the
weight strengths can be adapted in a training phase by using a set of examples and a
learning rule [5], [6]. All the weights are then adjusted to learn an underlying relation
in the training examples. It is worth noting that this way of finding the function to be
performed by a neural processing system is completely different from programming
a function on a typical processing system (e.g., a personal computer).
The training phase can be performed without supervision or with supervision [7].
In neural networks trained without supervision, no information concerning the
correct network output (i.e., the desired output) is provided during training:
such networks learn (i.e., adapt the weight strengths) to classify the data into
sets based on the intrinsic statistical information contained in the input data.
Examples of such networks, which usually use quantizers or form clusters, are
Kohonen's nets and the Carpenter/Grossberg classifier [6].
In neural networks trained with supervision, the adaptation of the weight strengths
occurs according to information, called the target, that specifies the desired
network output for each input pattern of the training set. Such networks
are generally used as associative memories or as classifiers. Examples of such
networks are the Hopfield nets, the Gaussian classifier, and the Multi Layer
Perceptron (MLP) [6].
1.2 Development History
The development history is divided into two parts. For Artificial Neural Networks,
the development of the study and understanding of neural networks is provided; for
Neuromorphics, the history of mimicking biology in silicon is provided.
1.2.1 ANN
The modern era of neural networks is believed to begin with the pioneering
work of McCulloch and Pitts in 1943 [8], which described a logical calculus of neural
networks. The first formal model of an elementary computing neuron was outlined.
In 1949, Hebb published a book [9] which contained an explicit statement of the
Hebbian learning rule for synaptic modification. In 1954, Minsky wrote a doctoral
dissertation entitled Theory of Neural-Analog Reinforcement Systems and Its Application
to the Brain-Model Problem at Princeton University. He built and tested
the first neurocomputer. In 1958, Rosenblatt introduced the perceptron architecture
for pattern-recognition problems. In 1960, Widrow and Hoff introduced the least-mean-square
(LMS) algorithm for the training of the adaptive linear element (Adaline)
and the multiple adaline (Madaline) networks [10], [11]. In 1969, Minsky and Papert
published a book [12] with detailed mathematics to illustrate the fundamental
limits of one-layer perceptrons in pattern recognition. In 1972, essential development
on associative memory was performed by Anderson, Kohonen, and Nakano,
independently. In 1976, Willshaw and Von Der Malsburg published a paper on self-organizing
maps, based on the ordered maps in the brain. In 1982, Grossberg and
Carpenter introduced the adaptive resonance theory (ART) networks [13]. In 1982,
Hopfield published a paper describing the Hopfield networks [14], using an energy
function to understand network dynamics. The Hopfield networks are similar
to the model developed by Amari in 1972. In the same year, Kohonen published his
work on self-organizing maps. In 1985, Ackley, Hinton, and Sejnowski developed the
Boltzmann learning rule [15]. In 1986, Rumelhart, Hinton, and Williams rediscovered
the back-propagation algorithm [5], which had been independently developed by Werbos
in his Ph.D. dissertation at Harvard University in 1974 [16]. Further developments
in the 90s and 00s are hybrids of all of these. Most recent are the spiking neuron
models, such as the Izhikevich model by Izhikevich [17], [18] in 2007, the Mihalas-Niebur
model [19] in 2009, integrate-and-fire neuron models and resonate-and-fire
neuron models. Recently Izhikevich has modeled the thalamus of the human brain
and simulated it successfully [18]. Other noteworthy developments are being made at
INI, Switzerland.
1.2.2 Neuromorphics
Intensive activity on VLSI implementation of neural network modules and systems
began in the mid-1980s. In 1989, Carver Mead published the book Analog VLSI
and Neural Systems, which includes chapters on silicon retina and silicon cochlea
chips [1]. In the same year, Carver Mead and Mohammed Ismail edited a book
which contains various silicon implementation examples [20]. Ulrich Ramacher and
Ulrich Ruckert edited a book on digital silicon chips [21]. Since the technology had not
reached deep submicron during the 90s, neuromorphic development stagnated. During
the 00s there has been significant development in this field, especially at INI, Switzerland
and at CALTECH, USA. Important are the works of Indiveri at INI, Switzerland [22] in
2003; P. Livi et al. developed address-event neuromorphic systems in 2009 [23]. Dudek
et al. developed a cortical neuron in silicon [24] in the year 2008. In 2009 Folowosele
worked on a switched capacitor implementation of neurons [25]. Most recent are the
silicon models of the Izhikevich neuron models [26], [27] in 2010 and 2011 respectively. Syed
Muassir and S. S. Gajre have implemented a Simple Neuron Circuit and an Analog to
Digital Converter (ADC) using only adjustable-threshold inverters based on the Hopfield
networks in 2012. The number of transistors required for the ADC is drastically
reduced due to the application of the neural network in the ADC [28].
1.3 Hardware Implementation of Neural Networks
(Neuromorphic Circuits)
The relationship between microelectronics and biological systems is described by
Carver Mead in [1]:
I have confidence that the powerful organizing principles found in the nervous
systems can be realized in silicon integrated circuits technology......
The efficient mapping of a neural system onto its implementation medium
is the essence of the design problem. ..... The silicon integrated circuits
technology provides an ideal synthetic medium in which neuro-biologists
can model organizational principles found in various biological systems.
It has been demonstrated that one human brain has about 100 billion neurons
and about 10^16 interconnections; the processing speed is about 10^14 interconnections
per second. Today's microelectronic technology is still not capable of implementing
such a complex biological neural model; nevertheless, it is possible to implement the
neural model of simple animals.
Two points can be considered when attempting to implement VLSI neural networks:
1. Take inspiration from the biological systems and borrow the principles, the structure,
and the functions, to build systems designed for practical applications.
2. Take inspiration from biological signal processing for VLSI signal processing.
Consequently, the keywords for designing VLSI Neural Systems are:
Massively Parallel Systems: The neural system is composed of a huge number of
processing elements (called neurons) which are highly interconnected and work
in parallel.
Collective Computation: The neurons' information processing is not locally
performed; rather, information is distributed over the whole neural system in the
processing elements, which together perform the required computation.
Adaptation: The neural system adapts itself to the processing according to the
evolution of the stimulus.
Exploitation of all Properties of Structures: Hardware neural systems must have
low power consumption and small area, and they do not need a large S/N ratio or
precision to accomplish the neural information processing.
1.3.1 Biology and Silicon Devices
We need to exploit the physics of silicon to reproduce the bio-physics of neural
systems. A good example of the similarity of biological channels and p-n junctions:
Drift and diffusion equations form a built-in barrier (V_bi versus the Nernst potential).
Exponential distribution of particles (ions in biology, electrons/holes in silicon).
Both biological channels and transistors have a gating mechanism that modulates
a channel.
Figure 1.3 compares the scales of biological structures and silicon devices. From
the figure, molecules are analogous to silicon and so on, and neurons are analogous to
multipliers.
Figure 1.3: Comparison of Scales
1.3.2 Digital Vs Analog VLSI
Table 1.1 compares the digital and analog VLSI technology for the Neuromor-
phic circuits.
Table 1.1: Digital vs Analog VLSI Technology
Description | Digital VLSI | Analog VLSI
Signal Representation | Numbers | Physical Values (e.g. V, I, Q etc.)
Time | Sampling | Continuous
Amplitude | Quantization | Continuous
Signal Regeneration | Along Path | Degradation
Cost | Cheap and easy | Expensive
Area per function | Large | Dense
Transistor Mode of Operation | Switch | All Modes
Architecture | Sequential | Parallel/Collective
Design and Test | Easy | Difficult
The main strengths of digital VLSI technology, namely signal regeneration, the
precision (i.e., number of bits) of the computation, and the simplicity of design and
test, are due to the latest developments in digital CAD tools. However, massively
parallel systems are very difficult to integrate on a single chip in digital technology.
Even though sub-micron technologies are now available, only a few neurons can be
integrated on the same chip, while neural systems need thousands of neurons working
in parallel. Moreover, the large number of interconnections between neurons is
one of the main factors wasting silicon area in digital implementations of neural
systems. Digital implementations are thus suited for processing in which precise
restitution of information is important (e.g. DSP for multimedia applications).
1.3.3 Analog VLSI for Neuromorphic Circuits
Analog VLSI technology looks attractive for the efficient implementation of
artificial neural systems for the following reasons.
Parallelism: Massively parallel neural systems are efficiently implemented in
analog VLSI technology, allowing high processing speed. The neural processing
elements are smaller than their digital equivalents, so it is possible to integrate
on the same chip a large number (i.e., thousands) of interconnections (i.e.,
synapses).
Fault tolerance: To ensure fault tolerance at the hardware level it is necessary
to introduce redundant hardware, and in analog VLSI technology the cost of
additional nodes is relatively low.
Low Power: The use of subthreshold MOS transistors reduces the synaptic and
neuron power consumption, thus offering the possibility of low power neural
systems.
Real-world Interface: Analog neural networks eliminate the need for A/D and
D/A converters and can be directly interfaced to sensors and actuators. This
advantage is evident when the data given to the neural network is massive and
parallel.
Low Values of S/N Ratio: This corresponds to a low precision (i.e., number
of bits) in performing the computation. This is not a problem since, in a neural
system, the overall precision of the computation is determined not by the single
computational nodes, but by the number of nodes and the interconnections between
nodes [29].
Thus Analog VLSI is best suited for the implementation of the Neuromorphic /
Biomimetic Circuits.
1.4 Summary of the Dissertation: Chapter Outline
The goal of this work is the study and implementation of the Neuron and the
Winner-Take-All circuit, which are the building blocks of any Neuromorphic System.
CMOS technology has been used to implement these circuits since it is the dominant
industry driver due to its low cost and ease of implementation. Analog VLSI is pursued
since it has been demonstrated that it is best suited for the implementation of
Neuromorphic circuits.
In Chapter 2, techniques available for Low Power Low Voltage designs are
listed. In addition, subthreshold operation of the MOS transistor is explained in detail,
since in the subthreshold region low voltage and low power can be achieved easily. In
Chapter 3, the translinear principle is presented and applied to the subthreshold MOS.
Some good examples of how to apply this principle for simplicity in design using
subthreshold MOS are dealt with diligently. In Chapter 4, the working of the neuron
is explained and simulations of the neuron design are presented. For the design of the
neuron, a 4-quadrant multiplier was designed and simulated, and a differential amplifier
was designed for the activation function of the neuron output stage. In Chapter
5, the WTA circuit designed using the translinear principle and simulated is explained
in detail. It is a novel implementation with good performance parameters when
compared with others in the literature. Finally, conclusions and future work are presented.
Chapter 2
Low Voltage Low Power Circuit
Design
2.1 The General Framework
2.1.1 Why analog? Why digital?
In recent years, digital signal processing has progressively supplanted analog
signal processing in chip design. This is due to its lower development costs,
better precision performance and dynamic range, as well as being easier to test. The
role of analog circuits has been mostly restricted to electronic applications interfacing
digital systems to the external world. Nevertheless, when precise computation of
numbers is not required (as is the case in systems designed for perception of a continuously
changing environment), and massively parallel collective processing of signals
is needed, low precision analog VLSI (very large scale integration) has proven to be
more convenient than digital in terms of cost, size and/or power consumption [29].
In recent years mixed-signal application-specific integrated circuits (ASICs) have
become increasingly popular. The cooperative coexistence of analog and digital circuits
is very beneficial since they compensate for each other's weaknesses. Hence,
although in many aspects digital electronics is superior, in reality it requires a symbiotic
relationship with analog.
2.1.2 Why Low Voltage?
Since the invention of the transistor more than 50 years ago, the progress of
microelectronics can be summarised as follows: 15 per cent decrease in feature size
per year, 30 per cent cost decrease per year, 50 per cent performance improvement and
15 per cent semiconductor market growth rate. The numbers speak for themselves.
This exponential evolution made many experts in the 1990s assert that fundamental
limits were about to be reached. Fortunately, technical innovations made it possible to
shrink the technologies to smaller dimensions than the predicted 0.3 µm. However, as
the dimensions of the devices reduce, a new constraint arises: the interconnect delays
and the fact that they directly affect the CV^2 power dissipation. In the past, this was
not a problem as the capacitances were scaled down together with the dimensions.
Recently this scaling relationship has been replaced by one proportional to the total
length of wires, L, in the circuit. The interconnect power dissipation can therefore be
rewritten as kV^2 L (where k is the dielectric permittivity). Hence, the most significant
parameter in the reduction of the interconnect power is the voltage, and new strategies
are required to operate circuits at lower power supply voltages [30].
However, this is not the only motivation fueling the eagerness of researchers to
operate circuits at lower voltages. The other one is related to the magnitude of the
electric fields in the devices. These grow proportionally as the dimensions are scaled
down, which increases the risk of dielectric breakdown. This can additionally be
compensated for by reducing the voltage differences across the devices. Hence a low
voltage power supply is beneficial.
2.1.3 Why Low Power?
The fast development of electronic-based entertainment, computing and com-
munication tools, especially portable ones, has provided a strong technology drive
for microelectronics during the last ten years. System portability usually requires
battery supply and therefore weight/energy storage considerations. Unfortunately,
battery technologies do not evolve as fast as the applications demand. Therefore the
challenge, derived from market requirements, is to reduce the power consumption of
the circuits.
In addition to consumer products, battery lifetime is a crucial factor in some
biomedical products which have to be either worn or implanted within the patient
for a long period of time; such systems are continuously increasing in number and in
scope. Investigation into low power biomedical systems is another interesting quest
for microelectronic designers [31], [32].
2.1.4 Why CMOS?
The choice of fully integrated VLSI complementary metal oxide semiconductor
(CMOS) implementations is based on their lower cost (whenever state-of-the-art
sub-micron VLSI processes are not required) and design portability. Furthermore,
CMOS technologies allow the possibility of integration with micro electro mechanical
systems (MEMS) [33]. These are the most important reasons that have directed
the semiconductor industry towards CMOS mixed-signal designs, and place CMOS
technologies as the leader in the microelectronics semiconductor industry [34].
2.2 Techniques to reduce the Power Consumption
and the Voltage Supply
Different techniques have been developed to reduce power consumption and
supply voltage in analog circuits.
2.2.1 Techniques for Voltage Reduction
The most popular techniques for voltage reduction are as follows:
1. Circuits with rail-to-rail operating range: This group includes all the techniques
that are meant to extend the voltage ranges of the signals. Most of these
techniques are based on redesigning the input and output stages in order to
increase their linear range [35]. In these topologies the transistors have to be
biased in those regions that optimise the operating range. Since the voltage
constraints are more restrictive, in order to get the devices working in a certain
region, sometimes it is necessary to shift the voltage levels.
2. Technique of cascading stages, instead of a single cascode stage: Conventional
circuit topologies stacked cascode transistors in order to obtain the high out-
put resistances and gains required by certain structures such as OPAMPs and
OTAs. However, stacking transistors in a given circuit branch makes the volt-
age requirements more demanding for the entire cell. The solution for this is
either to reduce the voltage requirements for the transistors, or to substitute
each single stage for a cascade of them, in such a way that the total gain at
the output would be the product of all the single ones (whenever high gain
is needed). If the latter route is taken, there is an added problem related to
frequency stabilisation [36]. Furthermore, as the number of branches increases,
so too does the power consumption. Therefore a compromise solution would be
to still use cascode transistors, but try to minimise their voltage requirements
within the whole operating range.
3. Supply multipliers: Charge pumps can be used to scale up the power supply
voltage for certain analog cells, while still keeping the low supply voltage value
for the digital blocks [37]. The main drawback of this technique is that it
requires large capacitors which take a large silicon area, a considerable overhead
for circuits using this technique. In addition, the extra power consumption can
be considerable.
4. Nonlinear processing of the signals: Most practical electronic systems are designed
to process signals in a linear form. However, the fact that the input/output
relationship between two variables has to be linear does not mean
that internally the system must be linear as well [38]. The fundamental devices
constituting the blocks are transistors, which are inherently nonlinear. Traditional
circuit techniques tried to linearise the behavioural laws of the devices
with more or less complicated topological solutions that in most cases unavoidably
increased the power consumption. The idea behind nonlinear techniques
is to exploit the nonlinear I/V characteristics of the transistors to process the
signals more efficiently. This technique can be applied by using the translinear
principle, which is explained in detail in the forthcoming sections. The next
chapter shows how this can be applied to circuits like multipliers.
2.2.2 Techniques for Current Reduction
The main circuit design techniques oriented towards reduction of current are:
1. Adaptive Biasing: This technique is based on using a non-static current bias to
optimise the power consumption according to signal demands [39].
2. Subthreshold Biasing: Another way to reduce the current levels, and hence the
power, is by using MOS transistors biased in the weak inversion region, driving
very low current levels [40]. In the next section we will explain this mode of
operation. In Chapter 3 we will see how this mode, combined with the translinear
principle, is helpful in achieving the Low Voltage Low Power circuits that can
be used in Neuromorphic circuit design.
2.3 The MOS Transistor in Weak Inversion
In this section we will explore the behavior of the MOS transistor in the subthreshold
regime, where the channel is weakly inverted. This will allow us to model
transistors operating with small gate voltages, where the strong inversion model
erroneously predicts zero current. The strong inversion MOSFET model makes the
assumption that the inversion charge Q_I goes to zero when the gate voltage drops
below the threshold voltage (see Figure 2.1(a)). This is not quite true.

Figure 2.1: Gate Voltage vs Charge. (a) On Linear Axis. (b) On Log Axis.

Below threshold, the channel charge drops exponentially with decreasing gate voltage.
This exponential relationship becomes clearer if we redraw the above figure with a
logarithmic y axis (see Figure 2.1(b)). From this figure we can define weak inversion
as the region where Q_I is an exponential function of gate voltage, strong inversion as
the region where Q_I is a linear function of gate voltage, and moderate inversion as a
transition region between the two.
In weak inversion, the inversion layer charge is much less than the depletion region
charge: Q_I << Q_B. Since the substrate is weakly doped, Q_B is small,
and there is not enough charge in the channel to generate a significant electric field to
pull electrons from the source to the drain. Current flows by diffusion, not by drift.
The inversion charge in the channel, while small, is an exponential function (because
of Fermi-Dirac statistics) of the barrier height. The barrier height represents the
surface potential φ_s. In weak inversion the surface potential is flat: it does not change
over the length of the channel. The surface potential can be modeled fairly accurately
by considering the capacitive divider between the oxide capacitance C_ox and
the depletion capacitance C_dep.
Figure 2.2: Representation of Capacitances
Using the equation for a capacitive divider and assuming that V_B = 0, we find
that:

φ_s = κ V_G                                            (2.1)

where κ, the gate coupling coefficient, represents the coupling of the gate to the
surface potential:

κ = C_ox / (C_ox + C_dep)                              (2.2)

The depletion capacitance stays fairly constant over the subthreshold region, and
κ is usually considered to be constant, although it increases slightly with gate
voltage. In modern CMOS processes, κ ranges between 0.6 and 0.8. It can have
slightly different values for pMOS and nMOS devices. A good, all-around approximation
for κ (unless another value is given) is κ ≈ 0.7.
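As a quick numerical illustration of Equation 2.2, the sketch below computes κ for one pair of assumed example capacitance values (these numbers are illustrative, not process data from this work):

```python
# Gate coupling coefficient from the capacitive divider of Eq. 2.2.
# Capacitance values (per unit area) are assumed example numbers.
C_ox = 8.5e-3    # oxide capacitance, F/m^2 (assumed)
C_dep = 3.5e-3   # depletion capacitance, F/m^2 (assumed)

kappa = C_ox / (C_ox + C_dep)
print(f"kappa = {kappa:.2f}")   # ~0.71, within the 0.6-0.8 range quoted above
```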
Now, when V_DS > 0, the important parameters are the carrier concentrations at
the two ends of the channel. Since the carrier concentration at the source end of the
channel is higher than at the drain end, electrons diffuse from the source to the drain.
The charge concentrations at the source (x = 0) and the drain (x = L) are given by:

|Q'_I0| ∝ e^{(κ V_G - V_S)/U_T}                        (2.3)

|Q'_IL| ∝ e^{(κ V_G - V_D)/U_T}                        (2.4)

where U_T is the thermal voltage:

U_T = kT/q ≈ 26 mV at room temperature                 (2.5)
We know that in diffusion, particle motion is proportional to the concentration
gradient. The concentration of electrons decreases linearly from the source to the
drain (i.e., the concentration gradient is constant), so we can write an expression for
the drain current as

I_D = W D_n (Q'_I0 - Q'_IL) / L = (W/L) μ_n U_T (Q'_I0 - Q'_IL)        (2.6)
This will lead us to the expression for the drain current in a subthreshold MOSFET:

I_D = I_0 (W/L) e^{κ V_G / U_T} ( e^{-V_S / U_T} - e^{-V_D / U_T} )     (2.7)

where I_0 is a process-dependent constant. For nFETs,

I_0n ≈ (2 μ_n C'_ox U_T^2 / κ) e^{-κ V_T0n / U_T}                       (2.8)
Typical values of I_0n range from 10^{-15} A to 10^{-12} A. Rearranging the terms,
Equation 2.7 for the drain current can be rewritten as

I_D = I_0 (W/L) e^{(κ V_G - V_S)/U_T} ( 1 - e^{-V_DS / U_T} )           (2.9)

Notice that when e^{-V_DS/U_T} << 1, the last term is approximately equal to
one and can be ignored. This occurs (to within 2%) for V_DS > 4U_T, since e^{-4} ≈ 0.018.
The expression for drain current then simplifies to:

I_D = I_0 (W/L) e^{(κ V_G - V_S)/U_T}   for V_DS > 4U_T (saturation)    (2.10)
At room temperature, 4U_T ≈ 100 mV, an easy value to remember. It is quite
easy to keep a subthreshold MOSFET in saturation, and the V_DS required to do so
does not depend on V_GS, as is the case above threshold (see Figure 2.3). This is very
advantageous for low-voltage designs.

Figure 2.3: V_DS vs I_D characteristics for Subthreshold MOS
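As a minimal numerical sketch of the model in Equations 2.9 and 2.10 (the parameter values are assumptions for illustration, not the 180nm process parameters used later in this work), the following Python snippet evaluates the subthreshold drain current and shows how little it changes once V_DS exceeds 4U_T:

```python
import math

# Subthreshold drain current per Eq. 2.9; all parameter values are assumed.
UT = 0.026        # thermal voltage at room temperature, V
kappa = 0.7       # gate coupling coefficient
I0 = 1e-15        # pre-exponential current, A (assumed)
W_over_L = 10.0   # aspect ratio (assumed)

def drain_current(VG, VS, VD):
    VDS = VD - VS
    return I0 * W_over_L * math.exp((kappa * VG - VS) / UT) * (1.0 - math.exp(-VDS / UT))

# Beyond VDS = 4*UT (about 100 mV) the current is essentially saturated.
for VDS in (0.05, 0.10, 0.20, 0.40):
    print(f"VDS = {VDS:.2f} V -> ID = {drain_current(0.4, 0.0, VDS):.3e} A")
```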
Another difference between subthreshold and above-threshold operation is the
way I_D changes as we increase V_GS. In a weakly-inverted FET, the current increases
exponentially. In a strongly-inverted FET, the current increases quadratically (square
law). This can be understood by looking at a plot of I_D vs. V_GS in two ways: with a
linear I_D axis and with a logarithmic I_D axis, as shown in Figure 2.4.

Figure 2.4: V_GS vs I_D characteristics for Subthreshold MOS. (a) On Linear Axis. (b) On Log Axis.
The transconductance of a subthreshold MOSFET is easily derived and found out
to be:

g_m = κ I_D / U_T                                          (2.11)
Subthreshold MOS and BJT
Subthreshold MOSFETs behave similarly to bipolar junction transistors (BJTs).
The collector current of an npn bipolar transistor exhibits an exponential dependence
on base-to-emitter voltage:

I_C = I_S e^{V_BE / U_T}                                   (2.12)

A bipolar transistor has a transconductance of g_m = I_C / U_T, which is equivalent
to the expression for a subthreshold MOSFET if we set κ = 1. Of course, a MOSFET
doesn't pull any current through its gate the way a bipolar transistor does through its base.
This can make circuit design much easier. We will use this similarity in the design
of translinear loops, and it has been applied in the multiplier and WTA designs presented in
further chapters.
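To make the comparison concrete, a tiny numerical sketch of the two transconductance expressions at an arbitrarily chosen bias current (100 nA is an assumed illustration value):

```python
# Transconductance comparison at an assumed bias current.
UT = 0.026       # thermal voltage, V
kappa = 0.7      # subthreshold gate coupling coefficient
I_bias = 100e-9  # bias current, A (assumed)

gm_mos = kappa * I_bias / UT   # subthreshold MOSFET, Eq. 2.11
gm_bjt = I_bias / UT           # bipolar transistor (the kappa = 1 case)

print(f"gm (subthreshold MOS) = {gm_mos*1e6:.2f} uS")
print(f"gm (BJT)              = {gm_bjt*1e6:.2f} uS")
```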
2.4 The Current Conveyor
This is one of the very basic current-mode circuits commonly used in neuromorphic
systems. The current conveyor block can be used to replace the traditional
operational amplifier.
In voltage-mode circuits, the main building block used to add, subtract, amplify,
attenuate, and filter voltage signals is the operational amplifier. In current-mode
circuits, the analogous building block is the current conveyor [41].
Figure 2.5: Current Conveyor
The original current conveyor (Figure.2.5) was a three-terminal device (two input
terminals X and Y and one output terminal Z) with the following properties:
1. The potential at its input terminal (X) is equal to the voltage applied at the
other input terminal (Y).
2. An input current that is forced into node X results in an equal amount of current
flowing into node Y.
3. The input current flowing into node X is conveyed to node Z, which has the
characteristics of a high output impedance current source.
The term conveyor refers to the third property above: Currents are conveyed from
the input terminal to the output terminal, while decoupling the circuits connected to
these terminals.
The simplest CMOS implementation of a current conveyor is a single MOS transistor
(Figure 2.6(a)). When used as a current buffer, it conveys current from a
low impedance input node X to a high impedance output node Z; and when used
as a source-follower, its source terminal X can follow its gate Y. A more elaborate
current-controlled conveyor is shown in Figure 2.6(b). This basic two-transistor
current conveyor is used in many neuromorphic circuits [42] [43] and is a key component
of the current-mode winner-take-all circuit that is analyzed in Section 5.2.2. It has
the desirable property of having the voltage at node X controlled by the current being
sourced into node Y. If the transistors are operated in the subthreshold domain, the
monotonic function that links the voltage at node X to the current being sourced
into Y is a logarithm. As the voltages at nodes Y and X are decoupled from each
other, V_x can be clamped to a desired constant value by choosing an appropriate
value of I_y.

Figure 2.6: Current Conveyor Implementations. (a) Single MOS transistor current conveyor. (b) Two MOS transistor current-controlled conveyor.
Sedra and Smith (1970) reformulated the definition of the current conveyor, describing
a new circuit that combines both voltage- and current-mode signal processing
characteristics. This new type of current conveyor (denoted as a conveyor of class II)
is represented by the symbol shown in Figure 2.5 and its input-output characteristics
are defined as

[ V_x ]   [ 1 0 0 ] [ V_y ]
[ I_y ] = [ 0 0 0 ] [ V_z ]                                (2.13)
[ I_z ]   [ 0 0 1 ] [ I_x ]

The input voltages V_x and V_y are linked by a unity-gain relationship (V_x = V_y);
the terminal Y has infinite impedance (I_y = 0), and the current forced into node X is
conveyed to the high impedance output node Z with unity gain.
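A minimal sketch of the ideal class-II conveyor relation in Equation 2.13, treated as a plain matrix-vector product (the voltages and current in the example call are arbitrary illustration values):

```python
import numpy as np

# Ideal CCII relation of Eq. 2.13: (Vx, Iy, Iz) from (Vy, Vz, Ix).
CCII = np.array([[1, 0, 0],
                 [0, 0, 0],
                 [0, 0, 1]])

def ccii(Vy, Vz, Ix):
    Vx, Iy, Iz = CCII @ np.array([Vy, Vz, Ix])
    return Vx, Iy, Iz

# Example: Y held at 0.3 V; 1 uA forced into X is conveyed to Z, while Y draws no current.
print(ccii(Vy=0.3, Vz=0.5, Ix=1e-6))   # -> (0.3, 0.0, 1e-06)
```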
Chapter 3
Translinear Circuits in
Subthreshold MOS
3.1 Introduction
In 1975, Barrie Gilbert coined the word translinear to describe a class of circuits
whose large-signal behavior hinges on the extraordinarily precise exponential current-voltage
characteristic of the bipolar transistor and the intimate thermal contact and
close matching of monolithically integrated devices [44]. The functions performed
by these fundamentally large-signal circuits (including multiplication, wideband signal
amplification, and various power-law relationships) were utterly incomprehensible
from the customary linear-circuit picture of the bipolar transistor as a linear current
amplifier whose key property is its forward current gain, β. At the same time,
Gilbert also succinctly enunciated a general circuit principle, the translinear principle
(TLP), by which we can analyze the (steady-state) large-signal characteristics of
such circuits quickly, usually with only a few lines of algebra, by considering only the
currents flowing in the circuits.
The word translinear derives from a contraction of one way of stating the exponential
current-voltage characteristic of the bipolar transistor that is central to
the functioning of these circuits: the bipolar transistor's transconductance is
linear in its collector current. Gilbert also meant the word to convey the notion
of analysis and design techniques (e.g., the translinear principle) that bridge the gap
between the well-established domain of linear-circuit design and the largely uncharted
domain of nonlinear-circuit design, for which precious little can be said in general.
The translinear principle is essentially a translation, through the exponential current-voltage
relationship, of a linear constraint on the voltages in a circuit (i.e., Kirchhoff's
voltage law) into a product-of-power-law constraint on the collector currents flowing in
the circuit.
In the biologically motivated computational paradigm, high processing throughput
is attained through a trade-off between massive parallelism and lower speed in the
circuits, and therefore subthreshold CMOS operation is possible. Such architectures
often necessitate the computation of linear and non-linear functions, and if a current-mode
design methodology is adopted, the translinear principle offers an effective way
of synthesizing circuits and systems [45] [1] [20].
3.2 The Translinear Elements
The Translinear Element (TE) shown in Figure 3.1a is drawn as an IGBT-like hybrid
bipolar/MOS device. This element is called the ideal translinear element.
We shall assume that the ideal TE produces a collector current, I, that is exponential
in its gate-to-emitter voltage, V, and is given by

I = λ I_s e^{κ V / U_T}                                    (3.1)

where I_s is a pre-exponential scaling current, λ is a dimensionless constant that
scales I_s proportionally, κ is a dimensionless constant that scales the gate-to-emitter
voltage, V, and U_T is the thermal voltage, kT/q. To demonstrate that the ideal TE
is translinear in the first sense of the word that we discussed in Section 3.1, we can
calculate its transconductance by simply differentiating Equation 3.1 with respect to
V to obtain

g_m = ∂I/∂V = λ I_s e^{κ V / U_T} · (κ / U_T) = κ I / U_T  (3.2)
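As a quick numerical check of Equation 3.2 (all values below are arbitrary illustration numbers), a central-difference derivative of Equation 3.1 can be compared against the closed-form transconductance:

```python
import math

# Check that dI/dV of Eq. 3.1 equals kappa*I/UT (Eq. 3.2); values are assumed.
UT, kappa, lam, Is = 0.026, 0.7, 1.0, 1e-15
V = 0.45   # gate-to-emitter voltage, V

def I(v):
    return lam * Is * math.exp(kappa * v / UT)   # Eq. 3.1

dV = 1e-6
gm_numeric = (I(V + dV) - I(V - dV)) / (2 * dV)  # central difference
gm_formula = kappa * I(V) / UT                   # Eq. 3.2
print(gm_numeric, gm_formula)                    # the two agree closely
```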
Figures 3.1b through 3.1d show three practical circuit implementations of the ideal
TE. The first of these TEs is the pn junction diode, shown in Figure 3.1b. Although
the forward-biased diode does have an exponential current-voltage characteristic, it
is a two-terminal device and does not, strictly speaking, have a transconductance.
Moreover, diodes seldom actually appear in translinear circuits; instead, for the sake
of device matching, we almost invariably use diode-connected transistors in place of
diodes. Nonetheless, for simplicity, many presentations of the translinear principle
begin by considering a loop of diodes. For the diode, λ corresponds to the relative
area of the pn junction and κ is typically very near to unity.

Figure 3.1: Translinear Elements. (a) Circuit symbol for ideal TE. (b) a diode (c) an npn BJT (d) a subthreshold MOSFET
The bipolar transistor, shown in Figure 3.1c, biased into its forward-active region,
is considered by most people to be the quintessential TE. The bipolar transistor
commonly exhibits a precise exponential relationship between its collector current
and its base-to-emitter voltage over more than eight decades of current. For the
bipolar transistor, λ corresponds to the relative area of the emitter-base junction and
κ is typically close to one. The main limitation of the bipolar transistor as a TE is
the existence of a finite base current, which is often what limits the range of usable
current levels in bipolar translinear circuits.
The subthreshold MOS transistor with its source and bulk connected together, as
shown in Figure 3.1d, biased into saturation, also has an exponential current-voltage
characteristic. In this case, λ corresponds to the W/L ratio of the MOS transistor
and κ is equal to the MOS transistor's κ, which is the incremental capacitive-divider ratio
between the gate and the channel. The requirement that the source and bulk be shorted
together stems from the fact that the gate and source do not have the same effect on
the energy barrier (i.e., the source-to-channel potential) that controls the flow of
current in the channel. The source potential directly affects this barrier height, whereas
the gate couples capacitively into the channel and only partially determines (i.e., with
a weight of κ) the channel potential. The bulk also couples into the channel
capacitively and partially determines the channel potential (i.e., with a weight of 1 - κ).
By connecting the source and bulk together, we can use the bulk in opposition to the
source to reduce the source's net effectiveness at controlling the barrier height to
match precisely the effectiveness of the gate.
3.3 The Translinear Principle
In this section, we shall derive the translinear principle for a loop of ideal TEs
and illustrate its use in analyzing translinear circuits. We shall then consider a loop of
subthreshold MOS transistors with their bulks all connected to the common substrate
potential to determine how the translinear principle is modified for such devices by
the body effect.
3.3.1 Translinear Loops of Ideal TE
Consider the closed loop of N ideal TEs, shown in Figure3.2. The large arrow
shows the clockwise direction around the loop. If the emitter arrow of a TE points
in the clockwise direction, we classify the TE as a clockwise element. If the emitter
arrow of a TE points in the counterclockwise direction, we classify the TE as a
counterclockwise element. CW is the set of clockwise-element indices and CCW
is the set of counterclockwise-element indices. As we proceed around the loop in
27
Figure 3.2: A Conceptual Translinear Loop conprising of N ideal TEs
the clockwise direction, the gate-to-emitter voltage of a counterclockwise element
corresponds to a voltage increase, whereas the gate-to-emitter voltage of a clockwise
element corresponds to a voltage drop. One way of stating Kirchhos voltage law
is that the sum of the voltage increases around a closed loop is equal to the sum of
the voltage drops around the loop. Consequently, by applying Kirchhos voltage law
around the loop of TEs shown in Figure.3.2, we have

nCCW
V
n
=

nCW
V
n
(3.3)
By Solving Equation.3.1 for the V in terms of I and substituting the resulting ex-
pression for each V
n
in Equation.3.3, we obtain

nCCW
U
T

log
I
n

n
I
s
=

nCW
U
T

log
I
n

n
I
s
(3.4)
Assuming that all TEs are operating at the same temperature, we can cancel the common factor of U_T/κ in all of the terms in Equation 3.4 to obtain
Σ_{n ∈ CCW} log( I_n / (λ_n I_s) ) = Σ_{n ∈ CW} log( I_n / (λ_n I_s) )        (3.5)
Because log x + log y = log xy, we can rewrite Equation 3.5 as
log Π_{n ∈ CCW} I_n / (λ_n I_s) = log Π_{n ∈ CW} I_n / (λ_n I_s)        (3.6)
By exponentiating both sides of Equation 3.6 we get
Π_{n ∈ CCW} I_n / (λ_n I_s) = Π_{n ∈ CW} I_n / (λ_n I_s)
which we rearrange as
Π_{n ∈ CCW} I_n / λ_n = I_s^(N_CCW − N_CW) Π_{n ∈ CW} I_n / λ_n        (3.7)
where N_CCW and N_CW denote, respectively, the number of counterclockwise elements and the number of clockwise elements. Now, it is easy to see that, if N_CCW = N_CW, then Equation 3.7 reduces to
Π_{n ∈ CCW} I_n / λ_n = Π_{n ∈ CW} I_n / λ_n        (3.8)
which has no remaining dependence on temperature or device parameters. Equation.3.8
is the translinear principle, which can be stated as follows.
In a closed loop of ideal TEs comprising an equal number of clockwise and counterclockwise elements, the product of the (relative) current densities flowing through the counterclockwise elements is equal to the product of the (relative) current densities flowing through the clockwise elements.
If each TE in the loop has the same value of λ, and if N_CCW = N_CW, then Equation 3.8 reduces to
Π_{n ∈ CCW} I_n = Π_{n ∈ CW} I_n        (3.9)
Equation.3.9 is an important special case of the translinear principle that can be
stated as follows.
In a closed loop of identical ideal TEs comprising an equal number of clockwise and counterclockwise elements, the product of the currents flowing through the counterclockwise elements is equal to the product of the currents flowing through the clockwise elements.
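As a quick sanity check of Equation 3.9, the short Python sketch below builds a four-element loop from the ideal TE law taken in the form I = λ·I_s·exp(κV/U_T) of Section 3.3, picks three loop currents, solves for the fourth from the loop voltage constraint of Equation 3.3, and confirms that the counterclockwise and clockwise current products match. The parameter values and the chosen currents are illustrative assumptions, not measured device data.

    import math

    # Illustrative ideal-TE parameters (assumed values, not from a real process)
    I_s = 1e-15   # saturation current [A]
    kappa = 0.7   # exponential slope factor
    U_T = 0.0259  # thermal voltage at room temperature [V]

    def te_voltage(I, lam=1.0):
        # Invert the ideal TE law I = lam * I_s * exp(kappa*V/U_T) for V
        return (U_T / kappa) * math.log(I / (lam * I_s))

    # Loop of 4 identical TEs: elements 1,2 counterclockwise, 3,4 clockwise.
    # KVL around the loop (Equation 3.3): V1 + V2 = V3 + V4.
    I1, I2, I3 = 10e-9, 40e-9, 20e-9
    V4 = te_voltage(I1) + te_voltage(I2) - te_voltage(I3)
    I4 = I_s * math.exp(kappa * V4 / U_T)

    print("I1*I2 =", I1 * I2)   # product of CCW currents
    print("I3*I4 =", I3 * I4)   # product of CW currents, equal per Equation 3.9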
3.3.2 Translinear Loops in Subthreshold MOS Transistors.
Consider the closed loop of N saturated subthreshold MOS transistors whose bulks are all connected to a common substrate potential, shown in Figure 3.3. Here, V_n represents the gate-to-source voltage of the nth MOS transistor, and U_n is the voltage on the nth node relative to the substrate potential. Again, the large arrow in Figure 3.3 indicates the clockwise direction around the loop. We shall consider a clockwise element to be one whose gate-to-source voltage is a voltage drop in the clockwise direction around the loop. We shall consider a counterclockwise element to be one whose gate-to-source voltage is a voltage increase in the clockwise direction around the loop.
Figure 3.3: A Translinear Loop of Subthreshold MOS Transistors with their bulks tied to
a common substrate potential.
Recalling from Chapter 2 that the channel current, I, of an nMOS transistor operating in subthreshold is given by
I = λ I_0 e^{κV_g/U_T} ( e^{−V_s/U_T} − e^{−V_d/U_T} )        (3.10)
where V_g is the gate-to-bulk voltage, V_s is the source-to-bulk voltage, V_d is the drain-to-bulk potential, λ is the W/L ratio of the transistor, I_0 is the subthreshold pre-exponential current factor, κ is the (incremental) capacitive divider ratio between the gate and the channel, and U_T is the thermal voltage, kT/q. If the drain-to-source voltage is larger than about 4U_T, then the transistor is saturated. Under these conditions, the second term in the parenthesis in Equation 3.10 is negligible compared to the first one, which reduces Equation 3.10 to
I = λ I_0 e^{(κV_g − V_s)/U_T}
which has no dependence on the drain-to-bulk potential.
Thus, if the nth MOS transistor is a clockwise element, we have that
I_n = λ_n I_0 e^{(κU_{n−1} − U_n)/U_T}
which we can rearrange to find that
e^{U_n/U_T} = ( e^{U_{n−1}/U_T} )^κ ( λ_n I_0 / I_n )        (3.11)
Equation 3.11 expresses a recurrence relationship between the nth node voltage and the (n−1)st node voltage for clockwise elements. On the other hand, if the nth MOS transistor is a counterclockwise element, we have that
I_n = λ_n I_0 e^{(κU_n − U_{n−1})/U_T}
which we rearrange to find that
e^{U_n/U_T} = ( e^{U_{n−1}/U_T} )^{1/κ} ( I_n / (λ_n I_0) )^{1/κ}        (3.12)
Equation 3.12 likewise expresses a recurrence relationship between the nth node voltage and the (n−1)st node voltage for counterclockwise elements.
We can use the recurrence relationships, expressed in Equations 3.11 and 3.12, to build up the translinear-loop constraint equation for the subthreshold MOS translinear loop, shown in Figure 3.3, as follows. We begin at one of the nodes in the loop, say U_0, and proceed sequentially around the loop in the clockwise direction, recursively applying Equation 3.11 or Equation 3.12 to get to the next node, depending on whether the current element is clockwise or counterclockwise. When we encounter a clockwise element, we raise the partially formed translinear-loop equation to the κ power and multiply it by λ_n I_0 / I_n, as expressed in Equation 3.11. When we encounter a counterclockwise element, we raise the partially formed translinear-loop equation to the 1/κ power and multiply it by (I_n / (λ_n I_0))^{1/κ}, as expressed in Equation 3.12. Finally, when we return to the node with which we started, we stop and simplify the resulting expression.
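This node-by-node procedure can be turned into a short symbolic sketch. The Python fragment below (using sympy; the four-element loop and the order in which the elements are encountered are illustrative assumptions, since the exact topology depends on the figure) applies the clockwise rule of Equation 3.11 and the counterclockwise rule of Equation 3.12 step by step. Closing the loop forces the accumulated expression back to its starting value, which yields a κ-dependent product constraint of the same form as Equations 3.13 and 3.14 below.

    import sympy as sp

    kappa = sp.Symbol('kappa', positive=True)
    x = sp.Symbol('x', positive=True)               # stands for exp(U0/U_T)
    I1, I2, I3, I4 = sp.symbols('I1:5', positive=True)
    l1, l2, l3, l4 = sp.symbols('lambda1:5', positive=True)
    I0 = sp.Symbol('I0', positive=True)

    # Assumed order in which the elements are met walking clockwise from U0:
    # two counterclockwise elements, then two clockwise elements.
    loop = [('ccw', I1, l1), ('ccw', I2, l2), ('cw', I3, l3), ('cw', I4, l4)]

    expr = x
    for sense, In, ln in loop:
        if sense == 'cw':    # clockwise rule, Equation 3.11
            expr = expr**kappa * (ln * I0 / In)
        else:                # counterclockwise rule, Equation 3.12
            expr = expr**(1/kappa) * (In / (ln * I0))**(1/kappa)

    # Returning to the starting node forces expr back to exp(U0/U_T) = x.
    constraint = sp.simplify(expr / x)
    print(sp.Eq(constraint, 1))   # kappa-dependent product constraint (cf. Eqs 3.13-3.14)

Which factor ends up carrying the 1/κ power depends on the assumed ordering; the point of the sketch is only to mechanize the bookkeeping described above.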
3.4 Examples of Translinear Circuits
Figure 3.4: Translinear circuit topologies of subthreshold MOS transistors. (a) Stacked loop (b) Alternating loop
When we apply the translinear principle explained in Section 3.3.2 to Figure 3.4, we obtain for the loop of Figure 3.4(a):
( I_1/λ_1 )^{1/κ} ( I_2/λ_2 ) = ( I_3/λ_3 )^{1/κ} ( I_4/λ_4 )        (3.13)
where the left-hand side collects the counterclockwise elements and the right-hand side the clockwise elements.
Similarly, for the loop of Figure 3.4(b) we get
( I_1/λ_1 )^{1/κ} ( I_3/λ_3 ) = ( I_2/λ_2 )^{1/κ} ( I_4/λ_4 )        (3.14)
with the counterclockwise elements again on the left and the clockwise elements on the right.
Now let us consider another example of Translinear Circuits using MOS as shown
in Figure.3.5
Figure 3.5: A Translinear Circuit using Subthreshold MOS Transistors and the output
equations.
In the next chapter in Section.4.2.1 we have designed the Four Quadrant Multiplier
using this Translinear Principle. Also in the last Chapter 5 we have used the same
principle for the design of WTA circuits.
Chapter 4
Neuron Circuit Design
Before we start to design the circuit that emulates the behavior of the neuron, we need basic knowledge of the neuron's structure and of how neurons transmit and receive information.
4.1 Some Biology !
The term Neuron was coined by the German anatomist Heinrich Wilhelm Waldeyer. The neuron's place as the primary functional unit of the nervous system was first recognized in the early 20th century through the work of the Spanish anatomist Santiago Ramon y Cajal. The number of neurons in the brain varies dramatically from species to species. One estimate puts the human brain at about 100 billion (10^11) neurons and 100 trillion (10^14) synapses. Another estimate is 86 billion neurons, of which 16.3 billion are in the cerebral cortex, and 69 billion in the cerebellum.
A Neuron is an electrically excitable cell that processes and transmits information by electrical and chemical signaling. Chemical signaling occurs via synapses, specialized connections with other cells. Neurons connect to each other to form neural networks. Neurons are the core components of the nervous system, which includes the brain, spinal cord, and peripheral ganglia. A number of specialized types of neurons exist: sensory neurons respond to touch, sound, light and numerous other stimuli affecting cells of the sensory organs that then send signals to the spinal cord and brain. Motor neurons receive signals from the brain and spinal cord, cause muscle contractions, and affect glands. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord.
4.1.1 Overview of Neuron:
A Neuron is a specialized type of cell found in the bodies of most animals (all members of the group Eumetazoa). Only sponges and a few other simpler animals have no neurons. The features that define a neuron are electrical excitability and the presence of synapses, which are complex membrane junctions that transmit signals to other cells. The body's neurons, plus the glial cells that give them structural and metabolic support, together constitute the nervous system. In vertebrates, the majority of neurons belong to the central nervous system, but some reside in peripheral ganglia, and many sensory neurons are situated in sensory organs such as the retina and cochlea.
Although neurons are very diverse and there are exceptions to nearly every rule, it is convenient to begin with a schematic description of the structure and function of a typical neuron. A typical neuron is divided into three parts: the soma or cell body, dendrites, and axon. The soma is usually compact; the axon and dendrites are filaments that extrude from it. Dendrites typically branch profusely, getting thinner with each branching, and extending their farthest branches a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock, and can extend for great distances, giving rise to hundreds of branches. Unlike dendrites, an axon usually maintains the same diameter as it extends. The soma may give rise to numerous dendrites, but never to more than one axon. Synaptic signals from other neurons are received by the soma and dendrites; signals to other neurons are transmitted by the axon. A typical synapse, then, is a contact between the axon of one neuron and a dendrite or soma of another. Synaptic signals may be excitatory or inhibitory. If the net excitation received by a neuron over a short period of time is large enough, the neuron generates a brief pulse called an action potential, which originates at the soma and propagates rapidly along the axon, activating synapses onto other neurons as it goes.
Many neurons fit the foregoing schema in every respect, but there are also exceptions to most parts of it. There are no neurons that lack a soma, but there are neurons that lack dendrites, and others that lack an axon. Furthermore, in addition to the typical axodendritic and axosomatic synapses, there are axoaxonic (axon-to-axon) and dendrodendritic (dendrite-to-dendrite) synapses.
The key to neural function is the synaptic signaling process, which is partly electrical and partly chemical. The electrical aspect depends on properties of the neuron's membrane. Like all animal cells, every neuron is surrounded by a plasma membrane, a bilayer of lipid molecules with many types of protein structures embedded in it. A lipid bilayer is a powerful electrical insulator, but in neurons, many of the protein structures embedded in the membrane are electrically active. These include ion channels that permit electrically charged ions to flow across the membrane, and ion pumps that actively transport ions from one side of the membrane to the other. Most ion channels are permeable only to specific types of ions. Some ion channels are voltage gated, meaning that they can be switched between open and closed states by altering the voltage difference across the membrane. Others are chemically gated, meaning that they can be switched between open and closed states by interactions with chemicals that diffuse through the extracellular fluid. The interactions between ion channels and ion pumps produce a voltage difference across the membrane, typically a bit less than 1/10 of a volt at baseline. This voltage has two functions: first, it provides a power source for an assortment of voltage-dependent protein machinery that is embedded in the membrane; second, it provides a basis for electrical signal transmission between different parts of the membrane.
Neurons communicate by chemical and electrical synapses in a process known as synaptic transmission. The fundamental process that triggers synaptic transmission is the action potential, a propagating electrical signal that is generated by exploiting the electrically excitable membrane of the neuron. This is also known as a wave of depolarization.
4.1.2 Anatomy of Neuron
Neurons are highly specialized for the processing and transmission of cellular signals. Given the diversity of functions performed by neurons in different parts of the nervous system, there is, as expected, a wide variety in the shape, size, and electrochemical properties of neurons. For instance, the soma of a neuron can vary from 4 to 100 micrometers in diameter.
Figure 4.1: Typical Structure of Neuron
The Soma (Cell Body) is the central part of the neuron. It contains the nucleus of the cell, and therefore is where most protein synthesis occurs. The nucleus ranges from 3 to 18 micrometers in diameter.
The Dendrites of a neuron are cellular extensions with many branches, and metaphorically this overall shape and structure is referred to as a dendritic tree. This is where the majority of input to the neuron occurs.
The Axon is a finer, cable-like projection that can extend tens, hundreds, or even tens of thousands of times the diameter of the soma in length. The axon carries nerve signals away from the soma (and also carries some types of information back to it). Many neurons have only one axon, but this axon may, and usually will, undergo extensive branching, enabling communication with many target cells. The part of the axon where it emerges from the soma is called the axon hillock. Besides being an anatomical structure, the axon hillock is also the part of the neuron that has the greatest density of voltage-dependent sodium channels. This makes it the most easily-excited part of the neuron and the spike initiation zone for the axon: in electrophysiological terms it has the most negative action potential threshold. While the axon and axon hillock are generally involved in information outflow, this region can also receive input from other neurons.
The axon terminal contains Synapses, specialized structures where neurotransmitter chemicals are released to communicate with target neurons.
4.1.3 Synapses for Connectivity
Neurons communicate with one another via Synapses, where the axon terminal or en passant boutons (terminals located along the length of the axon) of one cell impinges upon another neuron's dendrite, soma or, less commonly, axon. Neurons such as Purkinje cells in the cerebellum can have over 1000 dendritic branches, making connections with tens of thousands of other cells; other neurons, such as the magnocellular neurons of the supraoptic nucleus, have only one or two dendrites, each of which receives thousands of synapses. Synapses can be excitatory or inhibitory and either increase or decrease activity in the target neuron. Some neurons also communicate via electrical synapses, which are direct, electrically-conductive junctions between cells.
In a Chemical synapse, the process of synaptic transmission is as follows: when an action potential reaches the axon terminal, it opens voltage-gated calcium channels, allowing calcium ions to enter the terminal. Calcium causes synaptic vesicles filled with neurotransmitter molecules to fuse with the membrane, releasing their contents into the synaptic cleft. The neurotransmitters diffuse across the synaptic cleft and activate receptors on the postsynaptic neuron.
4.1.4 Mechanisms for Propagating Action Potentials
In 1937, John Zachary Young suggested that the squid giant axon could be used to study neuronal electrical properties. Being larger than but similar in nature to human neurons, squid cells were easier to study. By inserting electrodes into the giant squid axons, accurate measurements were made of the membrane potential.
The cell membrane of the axon and soma contain voltage-gated ion channels that allow the neuron to generate and propagate an electrical signal (an action potential). These signals are generated and propagated by charge-carrying ions including sodium (Na+), potassium (K+), chloride (Cl−), and calcium (Ca2+).
Figure 4.2: Information Flow through Neurons: A Signal propagating down an axon to
the cell body and dendrites of the next cell.
There are several stimuli that can activate a neuron leading to electrical activity, including pressure, stretch, chemical transmitters, and changes of the electric potential across the cell membrane. Stimuli cause specific ion-channels within the cell membrane to open, leading to a flow of ions through the cell membrane, changing the membrane potential.
Thin neurons and axons require less metabolic expense to produce and carry action potentials, but thicker axons convey impulses more rapidly. To minimize metabolic expense while maintaining rapid conduction, many neurons have insulating sheaths of myelin around their axons. The sheaths are formed by glial cells: oligodendrocytes in the central nervous system and Schwann cells in the peripheral nervous system. The sheath enables action potentials to travel faster than in unmyelinated axons of the same diameter, whilst using less energy. The myelin sheath in peripheral nerves normally runs along the axon in sections about 1 mm long, punctuated by unsheathed nodes of Ranvier, which contain a high density of voltage-gated ion channels. Multiple sclerosis is a neurological disorder that results from demyelination of axons in the central nervous system.
Some neurons do not generate action potentials, but instead generate a graded electrical signal, which in turn causes graded neurotransmitter release. Such nonspiking neurons tend to be sensory neurons or interneurons, because they cannot carry signals over long distances.
4.2 Neuron Circuit Design
In the analog VLSI implementation of artificial neural networks we can identify (among others) the following goals [29]:
Low Power consumption
Low Voltage operation
High Accuracy through massive overall parallelism
Small size to accommodate a huge number of neurons
The Neuron circuit block diagram, which is shown in Figure 4.3a, is obtained by the combination of the building blocks. Briefly, each input x is multiplied by its weight w, then the sum of these products is applied to an activation function and the output y is obtained. The multiplication mentioned in Figure 4.3a is obtained in Figure 4.3b by using a multiplier with two inputs x and w. The summation and the
Figure 4.3: (a) Neuron Model, (b) Neuron Circuit
activation function (Figure 4.3a) are realized by using a sigmoidal circuit as shown in Figure 4.3b.
Thus, when implementing artificial neural networks in analog hardware, the synaptic circuit design in particular is a very challenging task because large synaptic arrays are needed. Usually massive parallel systems are to be integrated on the same silicon die; the synaptic circuit power consumption then strongly determines the overall chip power consumption: if N is the number of the network inputs and/or neurons, the number of synapses is roughly proportional to N^2. The same considerations can be made for the size (e.g. silicon area). In particular, in the feedforward (i.e. recall) phase, the synaptic circuit is basically a four-quadrant analog multiplier. Though many different approaches can be envisaged for the analog circuit implementation of four quadrant multipliers [46], following the previous considerations, and taking into account a CMOS technology to reduce the silicon area, we identified the following design guidelines:
Differentially Coding the Information, which implies high noise and interference immunity;
Current Mode of Operation, which implies: i) high robustness with respect to spread of the technological parameters; ii) wide dynamic range of signals; iii) easy implementation of sums;
Weak Inversion Region of Operation of Transistors, which implies low power / low voltage operation;
Translinear Circuits: Translinear circuits are versatile and efficient (i.e. small power consumption) circuits [29].
4.2.1 Multiplier(Synapse) Circuit Design
Taking into consideration the design guidelines mentioned above, we design the multiplier circuit as a translinear loop of MOS transistors operating in the weak inversion region. In Chapter 2 the MOS transistor in weak inversion and the translinear principle are explained in detail; they are revisited here briefly.
The channel current of a MOS transistor biased in weak inversion can be expressed as:
I_DS = I_DC e^{(V_GS − V_th)/(η U_t)} ( 1 − e^{−V_DS/U_t} )        (4.1)
where I_DC is a specific current term, V_th is the threshold voltage, and η is the weak inversion slope factor. The mismatch between devices causes random variations of the values of I_DCi and V_thi [47]. From Equation 4.1, and taking into account a generic transistor i of a translinear loop, we can define
ε_i = ε(V_DSi) = 1 − e^{−V_DSi/U_t}        (4.2)
as a generic error term whose value depends on the drain-to-source voltage value. If, due to mismatch between devices, the terms I_DCi and V_thi experience variations (i.e. errors) of ΔI_DCi and ΔV_thi respectively from their nominal/typical values, then we can write for a generic transistor i:
I_DSi = (1 + ΔI_DCi) I_DC · e^{(V_GSi − V_th)/(η U_t)} · e^{−ΔV_thi/(η U_t)} · ε_i        (4.3)
Please note that in Equation 4.3 the term ΔI_DCi represents the percentage variation with respect to the nominal value; on the other hand, ΔV_thi represents an absolute variation.
In other words:
I_DC,real = (1 + ΔI_DC) I_DC ,    V_th,real = V_th + ΔV_th
Figure 4.4: Generic (alternate) Translinear Loop
Let us take into account the basic translinear loop shown in Figure 4.4. After some mathematical computations one can obtain:
I_DS1 · I_DS3 = δ · I_DS2 · I_DS4        (4.4)
where we have approximated η to the value of one. In the previous equation a non-linearity factor δ was introduced, which takes into account the effects of mismatch between the devices belonging to the translinear loop. The term δ is defined as follows:
δ = [ (1 + ΔI_DC1)(1 + ΔI_DC3) / ( (1 + ΔI_DC2)(1 + ΔI_DC4) ) ] · ( ε_1 ε_3 / (ε_2 ε_4) ) · e^{ −(ΔV_th1 + ΔV_th3 − ΔV_th2 − ΔV_th4) / U_t }        (4.5)
The non-linearity factor δ is determined not only by the spread of the technological parameters ΔI_DCi and ΔV_thi, but also by the bias point value through the terms ε_i = 1 − e^{−V_DSi/U_t}. Henceforth we will consider all terms ε_i equal to 1. The error introduced by this approximation is fairly low: in fact, if V_DS is equal to only 100 mV, the error is of the order of about 0.05%. Please note that the non-linearity term δ depends also on the topology of the circuit and on the layout design (i.e. matching structures). In particular, δ = 1 in the case of ideal matching between the devices of the translinear loop. In the following subsections we will apply the previous model to the four quadrant current mode translinear multiplier circuit.
Figure 4.5: Four Quadrant Multiplier Circuit Topology for the Synapse implementation
In the following we will consider the input (I_X and I_W) and output (I_OUT) signals as differential and balanced current mode signals (see Figure 4.5):
I_X^+ = (1 + x) I_B/2        (4.6)
I_X^− = (1 − x) I_B/2        (4.7)
I_W^+ = (1 + w) I_B/2        (4.8)
I_W^− = (1 − w) I_B/2        (4.9)
where
x and w are the input information-carrying variables (−1 ≤ x ≤ 1, −1 ≤ w ≤ 1);
I_B is the bias reference current;
I_X^+ and I_X^− are the positive and negative input current components;
I_W^+ and I_W^− are the positive and negative weight current components.
The transistors M1, M2, M3, M4, when in the weak inversion and saturation region, form a translinear loop whose drain currents are I_B, I_W^+, I_o1 and I_X^+ respectively, so we can write:
I_X^+ · I_W^+ = I_o1 · I_B ,  i.e.,  I_o1 = I_X^+ I_W^+ / I_B        (4.10)
The transistors M7, M8, M11, M12 form the translinear loop whose drain currents are I_B, I_W^−, I_X^− and I_o2 respectively, so we can write:
I_X^− · I_W^− = I_o2 · I_B ,  i.e.,  I_o2 = I_X^− I_W^− / I_B        (4.11)
The currents I_o1 and I_o2 (from Equations 4.10 and 4.11) are summed at node n2; the result is the positive single-ended term of the output current, I_OUT^+:
I_OUT^+ = I_o1 + I_o2 = ( I_X^+ I_W^+ + I_X^− I_W^− ) / I_B        (4.12)
In a similar way, the current term I_o3 (the result of the operation of the translinear loop made by transistors M1, M2, M5, M6) is summed at node n1 with the current term I_o4 (the result of the operation of the translinear loop made by transistors M7, M8, M9, M10). The result is the negative single-ended term of the output current, I_OUT^−:
I_OUT^− = I_o3 + I_o4 = ( I_X^− I_W^+ + I_X^+ I_W^− ) / I_B        (4.13)
It has been verified through experimental measurements that the translinear loops are, to a large extent, rather insensitive to the value of V_POL. Due to the spread of the technological parameters, each translinear loop introduces a non-linearity term δ_i (i = 1 : 4), see Equation 4.5. Thus the output current can be expressed as:
I_OUT = I_offset + [ A_x x + A_w w + A_I xw ] I_B/4        (4.14)
where:
A_I = δ_1 + δ_2 + δ_3 + δ_4
I_offset = ( δ_1 + δ_2 − δ_3 − δ_4 ) I_B/4
A_x = δ_1 − δ_2 + δ_3 − δ_4
A_w = δ_1 − δ_2 − δ_3 + δ_4
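To see that Equation 4.14 indeed reduces to a clean four-quadrant product in the ideal case, the Python sketch below encodes x and w as the differential currents of Equations 4.6-4.9, evaluates the translinear-loop outputs of Equations 4.10-4.13 and forms I_OUT = I_OUT^+ − I_OUT^−; with all δ_i = 1 this collapses to I_OUT = x·w·I_B. The bias value is the 250 nA used later in Section 4.2.2; everything else is straightforward arithmetic on the equations above.

    I_B = 250e-9  # bias reference current [A], as used in Section 4.2.2

    def multiplier_ideal(x, w):
        # Differential, balanced encoding of the inputs (Equations 4.6-4.9)
        Ixp, Ixm = (1 + x) * I_B / 2, (1 - x) * I_B / 2
        Iwp, Iwm = (1 + w) * I_B / 2, (1 - w) * I_B / 2
        # Translinear-loop products (Equations 4.10-4.13), ideal case delta_i = 1
        Io1, Io2 = Ixp * Iwp / I_B, Ixm * Iwm / I_B   # summed into I_OUT+
        Io3, Io4 = Ixm * Iwp / I_B, Ixp * Iwm / I_B   # summed into I_OUT-
        return (Io1 + Io2) - (Io3 + Io4)              # I_OUT = I_OUT+ - I_OUT-

    for x, w in [(0.5, 0.5), (0.5, -0.5), (-0.8, 0.25)]:
        print(x, w, multiplier_ideal(x, w), x * w * I_B)  # the two columns coincide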
The proposed multiplier circuit topology is more symmetric and exhibits the following advantages over the standard current mode MOS Gilbert multiplier [48]: a) The expression of the output current (see Equation 4.14) does not present any higher order term of the inputs (i.e. x^2, x^3, ..., w^2, w^3, ...) even in the case that mismatch is taken into account. b) If the non-linearity terms δ_i assume similar values (i.e. in the case of matching inside and between the transistors of the translinear loops) then the terms A_x, A_w, I_offset tend to decrease (while A_I tends to its nominal value) and the overall linearity increases. c) In the expressions of A_I, A_x, A_w, I_offset, terms of the form δ_i δ_j (i ≠ j) are not present.
4.2.2 Implementation & Simulations of Synapse Design
The synapse circuit design described in Section 4.2.1 has been implemented in the CADENCE software using the Spectre tool in the gpdk 180nm process technology. The complete implemented circuit schematic diagram is shown in Figure 4.8.
The circuit has been supplied with a low voltage of 0.7V, using the minimum channel length for all MOS transistors. The transistor sizes have been calculated according to an Inversion Coefficient of 0.1; the interested reader may refer to the book by David M. Binkley [49]. The transistor sizes are reported in Table 4.1.
Table 4.1: Transistor sizes of the 4-Quadrant Translinear Multiplier (Synapse) shown in Figure 4.8
Transistor      M1 - M12    M_p1, M_p2
Size W [µm]     32.4        1
Size L [µm]     0.18        12
In the following measurement results, I_B was set to 250nA, I_OUT^+ and I_OUT^− vary in the range [0nA, 250nA], while I_OUT varies in the range [-250nA, 250nA]. Figure 4.6 shows the measured DC characteristics of the multiplier: the x input is on the x-axis, and the w input is used as the parameter. Figure 4.7 shows the measured DC transfer characteristics of the multiplier in the complementary case: w is the input on the x-axis, and x is used as the swept parameter. One can note that the multiplier exhibits linear behavior with respect to both inputs.
Figure 4.6: Measured DC Transfer Characteristics when w input is used as parameter.
Figure 4.7: Measured DC Transfer Characteristics when x input is used as parameter.
In Figure 4.9 the transient analysis for sinusoidal inputs to the designed four quadrant multiplier is shown. The w input is set to a high-frequency 4kHz sinusoidal waveform with a peak value of 160nA; the x input is set to a low-frequency 100Hz sinusoid with a peak value of 40nA. The resulting modulated waveform is shown in Figure 4.10. The THD, i.e. Total Harmonic Distortion, is calculated for the 41ms transient analysis using the calculator thd function in CADENCE. The calculated THD is found to be 3.563%. The power consumed is 0.58nW, which is very low.
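For reference, a THD figure of this kind can also be cross-checked outside the CADENCE calculator. The sketch below (a minimal example using numpy, with a synthetic waveform standing in for the exported transient data) estimates THD from an FFT as the ratio of the harmonic energy to the fundamental, which is the usual definition such a thd function implements; the sampling rate, fundamental and harmonic content are assumptions for illustration only.

    import numpy as np

    fs = 1.0e6                      # assumed sampling rate of the exported waveform [Hz]
    t = np.arange(0, 41e-3, 1 / fs) # 41 ms record, as in the transient analysis
    f0 = 4e3                        # fundamental assumed for the THD estimate [Hz]

    # Synthetic stand-in for the exported output current: fundamental plus a
    # small third harmonic, just to exercise the estimator.
    i_out = 160e-9 * np.sin(2 * np.pi * f0 * t) + 5e-9 * np.sin(2 * np.pi * 3 * f0 * t)

    spectrum = np.abs(np.fft.rfft(i_out * np.hanning(len(i_out))))
    freqs = np.fft.rfftfreq(len(i_out), 1 / fs)

    def bin_amplitude(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = bin_amplitude(f0)
    harmonics = [bin_amplitude(k * f0) for k in range(2, 8)]
    thd = np.sqrt(sum(h**2 for h in harmonics)) / fund
    print("THD = {:.2%}".format(thd))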
Figure 4.8: Overall Circuit Implementation of the Four Quadrant Multiplier for the Synapse Circuit Design.
Figure 4.9: Transient analysis of the input waveforms for the x and w inputs.
Figure 4.10: Output current waveform in the case of waveform modulation when two different sinusoids are applied to the multiplier.
Conclusions for the Synapse Design
Thus we have designed the Synapse circuit using a four quadrant multiplier. This circuit can work at a very low supply voltage of 0.7V and the total power consumed is 0.58nW. Compared with all the recent works, to the best of our knowledge this is the first attempt to design a circuit of this topology in a 180nm process technology working at a low voltage of 0.7V. This circuit with the given performance is very well suited for the implementation of the synapse.
4.2.3 Activation Function Circuit Design
The activation function is the output stage of a neuron: it chooses a value in its output interval according to its input and transmits it as an input to the synapses of the next layer's neurons. This activation function can be designed using the differential transconductance amplifier circuit. For this we should first see the working of the differential pair and then of the differential transconductance amplifier.
MOS Differential Pair
The differential pair has the same basic structure as the source follower, except that the bias current I_b is now shared by two MOSFETs, M1 and M2, whose sources are connected to the drain of the bias MOSFET M_b, as shown in Figure 4.11. The sharing of the current between M1 and M2 depends on their respective gate voltages V_1 and V_2. If all MOSFETs are operated below threshold and in saturation, and we assume that M1 and M2 have the same subthreshold slope factor κ_n, we obtain
Figure 4.11: MOS Differential Pair
I_1 = I_b e^{κ_n V_1/U_T} / ( e^{κ_n V_1/U_T} + e^{κ_n V_2/U_T} )        (4.15)
I_2 = I_b e^{κ_n V_2/U_T} / ( e^{κ_n V_1/U_T} + e^{κ_n V_2/U_T} )        (4.16)
Differential Transconductance Amplifier for Activation Function
The two output currents in the differential pair circuit can be subtracted from one another to form a single bidirectional output current. The subtraction is performed by connecting a current mirror of the complementary transistor type to the differential pair, as shown in Figure 4.12. The resulting circuit is the simplest version of a differential transconductance amplifier. As long as all MOSFETs stay in satura-
Figure 4.12: MOS Differential Transconductance Amplifier for the implementation of the Activation Function
tion and the differential pair is operated below threshold, the output current is given by
I_out = I_1 − I_2 = I_b ( e^{κ_n V_1/U_T} − e^{κ_n V_2/U_T} ) / ( e^{κ_n V_1/U_T} + e^{κ_n V_2/U_T} ) = I_b tanh( κ_n (V_1 − V_2) / (2U_T) )        (4.17)
The simulation results for the designed activation function are shown below in Figure 4.13. The circuit has been supplied with the same voltage as the multiplier designed above: the supply voltage for this differential transconductance amplifier is also 0.7V.
Figure 4.13: DC Voltage Characteristics of the Differential Transconductance Amplifier for the Activation Function
4.2.4 Conclusion
Thus we have designed the four quadrant multiplier for the synapse implementation, as shown in Figure 4.8, and the differential transconductance amplifier for the activation function, as shown in Figure 4.12. A single neuron consists of these two circuits, as shown in Figure 4.3. The complete neuron circuit works at a low voltage of 0.7V. The power consumed is also very low, in the nW range, which makes it suitable for building arrays of neurons in each layer for parallel processing of signals as in the human brain.
Chapter 5
Design of WTA Circuit
5.1 Introduction
Winner-take-all is a computational principle applied in computational models of neural networks by which neurons in a layer compete with each other for activation. Winner-take-all networks are commonly used in computational models of the brain, particularly for distributed decision-making in the cortex. Important examples include hierarchical models of vision [50], and models of selective attention and recognition. They are also common in artificial neural networks and neuromorphic analog VLSI circuits. It has been formally proven that the winner-take-all operation is computationally powerful compared to other nonlinear operations, such as thresholding [51].
The human vision-processing system is built of numerous complex neural layers that communicate with one another by means of feedforward and feedback neural connections. Via these connections, each neuron frequently signals to others at intra-layer or inter-layer locations by broadcasting electrical streams of pulses. Every time a neuron generates a pulse, its addressing information is sensed by a neural junction called a synapse, which is temporally connected to a centric sensory line (also known as the bus), where many other neurons are simultaneously competing for the right of way in order to travel further. In such a competition, the general rule is: the recipient neuron at the end of the bus will only listen to neurons that are active when it is active (i.e., the winners are those who have stronger and more consistent signal intensity), and ignore the rest.
A winner-takes-all (WTA) circuit, which identifies the highest signal intensity among multiple inputs, is arguably the most important building block seen in various neural networks, fuzzy control systems, and, increasingly often, in integrated image sensors and neuromorphic vision chips that aim to emulate, or even (though such claims are widely regarded with suspicion) outperform, the extremely light-sensitive coat of the posterior part of the human eye that receives the image produced by the lens; namely, the retina. Once the neuron (also referred to as the cell) with the highest input signal is successfully selected by the WTA circuit, a certain value will be assigned to that winning cell by means of current or voltage, while all other cells' nominal values will be set to null (i.e., they lose).
Many WTA circuit implementations have been proposed in the literature [52] [43] [42] [53] [54] [55] [56]. The MOS implementation of the WTA function was first introduced by Lazzaro et al. [52]. This very compact circuit optimizes power consumption and silicon area usage. It is asynchronous, processes all input currents in parallel and provides output voltages in real time. The first true current-mode (CM) WTA circuit, producing an output current that is proportional to the value of the winning current, was introduced by Andreou et al. [43] and Boahen et al. [42]. In 1993, the use of positive feedback to improve the performance of a CM WTA system was reported by Pouliquen et al. [53]. Several modifications to Lazzaro's design have been suggested in the past [54] [55] [56]. The circuit was modified by Starzyk and Fang [54], improving precision and speed performance. In 1995, DeWeerth and Morris [55] added distributed hysteresis using a resistive network. Distributed hysteresis allows the winning input to shift between adjacent locations while maintaining its winning status, without having to reset the network. Additional modifications that endow Lazzaro's WTA with hysteretic and lateral inhibition and excitation properties have been proposed by Indiveri [56]. Other recent good implementations of WTA are those of Fish et al. [57], who made the circuit work in strong inversion as well as in subthreshold modes. A recent subthreshold implementation is reported by Rahman et al. [58], but they have not reported the power dissipated in the circuit. A most recent implementation by D. Moro et al. [59] has been done in the strong inversion mode. We have implemented and optimized the same circuit in the subthreshold mode and we found exciting results in terms of voltage supply, power dissipation and resolution.
5.2 Current Mode WTA Circuits
Figure 5.1: Current Mode WTA Neural network
In the current mode approach to the WTA neural network shown in Figure 5.1, the k-th output-current winner selection is based on the criterion of maximum activation among all m neurons participating in a competition. The weights of the winning neuron with the largest i_OUTk are adjusted, while the weights of the others remain unaffected. As shown above, the CM WTA implements the max function:
i_OUTk = max_{i=1,2,...,m} Σ_{j=1}^{n} w_ij · i_INj        (5.1)
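Behaviorally, Equation 5.1 is just an argmax over the weighted input sums. The small Python sketch below (with made-up weights and input currents purely for illustration) computes each neuron's activation and selects the winner, which is the operation the current-mode circuits of the following sections realize in analog hardware.

    # Illustrative weights w[i][j] and input currents i_in[j] (arbitrary values)
    w = [[0.2, 0.7, 0.1],
         [0.5, 0.1, 0.4],
         [0.3, 0.3, 0.3]]
    i_in = [10e-9, 25e-9, 5e-9]   # input currents [A]

    activations = [sum(wij * ij for wij, ij in zip(row, i_in)) for row in w]
    winner = max(range(len(activations)), key=lambda i: activations[i])

    print("activations [nA]:", [round(a * 1e9, 2) for a in activations])
    print("winning neuron:", winner, "-> i_OUT =", activations[winner])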
Thus, for an effective implementation of the WTA circuit, we will use the current conveyor circuits described in Section 2.4. First we will see the working principle of the current mode WTA by Lazzaro et al. [52], which is the basic and simplest WTA circuit.
5.2.1 Lazzaro's WTA Circuit Principle
Figure 5.2 shows the schematic diagram of the Lazzaro Winner-Take-All circuit. It has 3 cells. A single wire, associated with the voltage potential V_c, computes the inhibition for the entire circuit. To apply this inhibition locally, each cell responds to the common wire voltage V_c using transistor M_i1. This computation is continuous in time; no clocks are needed. The output representation of the circuit is not binary; the winning output encodes the logarithm of its associated input.
Figure 5.2: Schematic Diagram of 3 cells of the Lazzaro WTA Circuit
In order to understand the working behavior of the circuit shown in Figure 5.2, let us consider the two-cell case (ignoring the third cell shown) in which the inputs are equal, i.e. I_in1 = I_in2 = I_m. Transistors M_11 and M_21 have identical potentials at their gates and sources, and are both sinking the same current I_m; thus the drain potentials V_1 and V_2 must be equal in magnitude. Therefore the transistors M_12 and M_22 must sink similar currents, I_c1 = I_c2 = I_c/2. In the subthreshold region, the equation I_m = I_o exp(V_c/V_o) describes M_11 and M_21, where I_o is a fabrication parameter and V_o = kT/q. Similarly,
I_c/2 = I_o exp( (V_m − V_c)/V_o ),
where V_m ≡ V_1 = V_2, describes the transistors M_12 and M_22. Solving for V_m(I_m, I_c) yields
V_m = V_o ln( I_m / I_o ) + V_o ln( I_c / (2 I_o) )        (5.2)
Thus, for equal input currents, the circuit produces equal output voltages. The output voltage V_m logarithmically encodes the magnitude of the input current I_m.
The input condition I_in1 = I_m + δ_i, I_in2 = I_m illustrates the inhibitory action of the circuit. Transistor M_11 must sink δ_i more current than in the previous example; as a result, the common gate voltage V_c rises. Transistors M_11 and M_21 share this common gate; thus, M_21 must also sink I_m + δ_i. But only I_m is present at the drain of M_21. To compensate, the drain voltage of M_21, V_2, must decrease. For small δ_i, the Early effect serves to decrease the current through M_21, decreasing V_2 linearly with δ_i. For large δ_i, M_21 must leave saturation, driving V_2 to approximately 0V. As desired, the output associated with the smaller input diminishes. For large δ_i, I_c2 ≈ 0 and I_c1 ≈ I_c. The equation I_m + δ_i = I_o exp(V_c/V_o) describes the transistor M_11, and the equation I_c = I_o exp((V_1 − V_c)/V_o) describes transistor M_12. Solving for V_1 yields
V_1 = V_o ln( (I_m + δ_i) / I_o ) + V_o ln( I_c / I_o )        (5.3)
The winning output encodes the logarithm of the associated input. The symmetrical circuit topology ensures similar behavior for an increase in I_in2 relative to I_in1. The resistance seen at node V_c is approximately
R_o,i ≈ 1 / ( g_m,i1 · r_o,i2 · g_m,i2 )        (5.4)
where r_o,i2 is the drain-source resistance of the transistor M_i2, and g_m,i1, g_m,i2 are the transconductances of the transistors M_i1 and M_i2, respectively.
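The winner-take-all action described above is easy to reproduce with a behavioral model. The Python sketch below is an idealized model under the stated subthreshold equations, with assumed values for I_o and V_o and an idealized winner assignment standing in for the Early-effect transition region: it computes the common-node voltage set by the largest input and shows that cell taking essentially all of I_c while its output node logarithmically encodes the winning input.

    import math

    I_o = 1e-15    # assumed subthreshold pre-exponential current [A]
    V_o = 0.0259   # assumed kT/q [V]
    I_c = 80e-9    # common bias current [A]

    def lazzaro_wta(inputs):
        # The shared gate voltage Vc settles so that the largest input can be
        # sunk in saturation: Vc = Vo * ln(max(I_in)/Io)  (Equation 5.2/5.3 regime).
        winner = max(range(len(inputs)), key=lambda k: inputs[k])
        V_c = V_o * math.log(inputs[winner] / I_o)
        # Idealization: the winning cell takes (almost) all of Ic, the losing
        # cells' output nodes collapse towards 0 V and carry ~0 current.
        out_currents = [I_c if k == winner else 0.0 for k in range(len(inputs))]
        V_win = V_c + V_o * math.log(I_c / I_o)   # logarithmic encoding (Eq. 5.3)
        return winner, V_c, V_win, out_currents

    print(lazzaro_wta([10e-9, 10.6e-9, 5e-9]))   # the second cell wins by a small margin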
5.2.2 Novel Implementation of CM WTA
The circuit proposed by D. Moro-Frias et al. [59] has been implemented here in the subthreshold region of the MOS transistors. The complete implemented circuit diagram is shown in Figure 5.3. It consists of n identical cells (n = 3 in Figure 5.3), each with three transistors, M_i1, M_i2 and M_i3, and a DC bias current I_Bi (i = 1, ..., n). The cells are connected together at the low-impedance common node V_c to a DC sink current source named I_c.
The additional transistor M_i3 reduces the resistance seen at node V_c through negative feedback. In this way, the speed of the topology is improved without impacting the cell gain. Note that transistors M_i2 and M_i3 constitute what is called a super source follower. In contrast to the Lazzaro circuit shown in Figure 5.2, whose impedance is given by Equation 5.4, the impedance seen at node V_c is given by:
R_o,i ≈ 1 / ( g_m,i1 · r_o,i1 · g_m,i2 · r_o,i2 · g_m,i3 )        (5.5)
Figure 5.3: Schematic Diagram of the Current Mode WTA circuit in subthreshold mode of operation.
Thus, the resistance has been reduced by a factor of about g_m,i3 · r_o,i1. By design, all the bias currents of the implemented circuit topology are equal, I_b1 = I_b2 = I_b3 = I_b, and I_c > 3I_b.
In order to understand the operation principle, consider first the case in which all the current inputs are equal: I_in1 = I_in2 = I_in3 = I. In this case the M_i2 transistors sink I_b each, whereas the M_i3 transistors each sink the same current, (I_c − 3I_b)/3.
When the input condition changes to I_in1 = I + ΔI and I_in2 = I_in3 = I, M_11 sinks an extra current equal to ΔI, incrementing the voltage at node V_1 and therefore incrementing the voltage at the common node V_c. Now M_21 and M_31 must also sink I + ΔI, but I_in2 and I_in3 are just I, so the drain voltages of these transistors decrease in order to compensate for the increase in V_c. For large values of ΔI, M_21 and M_31 must leave saturation, driving V_2 and V_3 to approximately 0V. As desired, the outputs associated with the smaller inputs diminish. Now I_c flows only through the winner cell, so a current I_c − I_b flows through M_13.
In order to get a copy of the winning current, M_out is connected to node V_c. In this way, the gate-to-source voltage of M_out is set to the same gate-to-source voltage as that of the M_i1 transistors, and it drains a current equal to the winning one.
5.2.3 Simulation Results of WTA circuit
The WTA shown in Figure 5.3 was designed in a 180nm process technology in CADENCE. For any large scale system, resolution, supply voltage and power consumption are the parameters used for the characterization [60]. The dimensions of the transistors, the supply voltage, and the currents are listed in Table 5.1.
Transient Response of WTA
Figure 5.4 shows the transient analysis for sinusoidal input currents of 20nA peak-to-peak at frequencies of 1MHz, 2MHz and 5MHz for I_in1, I_in2 and I_in3, respectively. Since the circuit is a Winner-Take-All, as expected the output current follows the envelope of the input currents.
Table 5.1: Dimensions of the Transistors, Supply Voltage and Currents used for the Subthreshold Operation of the WTA circuit.
Parameter          Value
V_dd               0.8V
I_bi               20nA
I_c                80nA
L                  2µm
W_Mi1, W_Mi2       625nm
W_Mi3              1.25µm
Figure 5.4: Transient Response of the subthreshold WTA circuit, where I_out follows the envelope of the input currents
Resolution Measurement of WTA
For the resolution measurements, the input currents for the first and second cells were I_in1 = 10nA and I_in2 = 1nA. The input current for the third cell, I_in3, was incremented from 0 to 40nA. When the value of I_in3 is lower than 10nA, the first cell wins, setting a voltage proportional to the value of I_in1 at node V_c and, as a consequence, draining all the tail current of the WTA. When I_in3 is greater than I_in1, the third cell wins, so the voltage at node V_c is proportional to I_in3. Ideally, when I_in3 becomes greater than I_in1, the first cell instantly turns off. However, during the transition both the first and third cells are active. As the value of I_in3 gets closer to I_in1, the third cell gradually turns on, drawing a fraction of I_c. As I_in3 increases, the third cell draws more and more current until I_in3 becomes greater than I_in1 by a certain quantity and the first cell turns off. So, the increment of I_in3 beyond I_in1 at which the first cell turns off indicates the resolution of the whole WTA [60]. In Figure 5.5 the DC response to this test is shown for the proposed WTA. As shown, voltage V_1 decreases whereas V_3 increases as I_in3 increases.
Figure 5.5: DC Response of the subthreshold WTA circuit
The resolution was measured at 20% of V_1's final value (when I_in3 = 40nA). The final value of V_1 was measured to be 9.87mV, so 20% of that final value is equal to 1.974mV, and at that point the excess of I_in3 over I_in1 was found to be 600pA. Therefore the resolution of the simulated WTA circuit is 600pA.
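The resolution-extraction procedure just described is essentially a threshold search on the simulated DC sweep. The Python sketch below shows the bookkeeping on a made-up sweep array (stand-in values, not the exported CADENCE data); the particular reading of the 20% criterion used for the threshold is an assumption noted in the comments.

    # Stand-in DC sweep data: (I_in3 [A], V_1 [V]) pairs, ordered by increasing I_in3.
    # In a real run these points would be exported from the CADENCE DC analysis.
    sweep = [(9.0e-9, 0.250), (9.5e-9, 0.205), (10.0e-9, 0.120), (10.3e-9, 0.035),
             (10.6e-9, 0.0115), (11.0e-9, 0.0105), (20e-9, 0.0100), (40e-9, 0.00987)]

    I_in1 = 10e-9                  # fixed winning input of the first cell [A]
    V1_final = sweep[-1][1]        # V_1 at the end of the sweep (I_in3 = 40 nA)
    threshold = 1.2 * V1_final     # 'settled to within 20% of its final value'
                                   # (one reading of the 20% criterion; an assumption)

    resolution = None
    for I_in3, V1 in sweep:
        if V1 <= threshold:        # first cell considered turned off
            resolution = I_in3 - I_in1
            break

    print("resolution =", None if resolution is None else "%.1f pA" % (resolution * 1e12))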
Power Consumption
The power consumed by the circuit was calculated by the CADENCE software and was found to be approximately equal to 52.3 µW.
Comparison with other CM WTA
Table 5.2 compares the current mode WTA implementations available in the literature with the simulated subthreshold WTA circuit. Only 3 transistors per cell are used in our design, which, as seen from the table, matches the minimum reported. The required V_dd is also much lower than in the other designs, the power consumed is very low, and the resolution is good. Overall, the performance of the circuit simulated in subthreshold is the best.
Table 5.2: Performance Characteristics Comparison with other WTA circuits
Parameter     [61]      [62]      [58]    [63]    [64]     [59]      This Ckt
Input         I         I         I       I       I        I         I
Output        I         I         V       I       V        I         I
Vdd           2.5V      1.2V      0.7V    5V      3.3V     2.5V      0.8V
Trans/Cell    3         3         5       4       12       3         3
Resolution    1.06µA    3.9µA     -       0.5µA   -        1.55µA    0.6nA
Power         203.3µW   133.9µW   -       -       87.5µW   281.7µW   52.3µW
Technology    0.13µm    0.13µm    90nm    2µm     0.35µm   0.13µm    180nm
Chapter 6
Conclusions and Future Work
6.1 Conclusions
In this dissertation, a novel implementation of low voltage and low power neuron and WTA structures is presented. All the circuits are designed in current mode and using the translinear principle. The neuron design was divided into the synapse design and the activation function design. For the synapse design, a four quadrant multiplier was designed and simulated using the translinear principle in the subthreshold region of MOSFETs. The subthreshold mode of the MOS transistor was also used for the design of the transconductance differential amplifier for the activation function. Thus, overall, the neuron circuit consumes very little power and requires a low voltage for operation, making parallel processing of a large number of neurons on a single chip a reality.
Further, another most important building block of neuromorphic circuits is the WTA, which selects the winner and outputs the same. A novel implementation has been done in a 180nm process technology with subthreshold MOS transistors requiring low voltage; here also the translinear principle has been used. Analog circuits of this kind in 180nm are almost absent from the technical literature and, to our knowledge, this is the first such attempt. Both the neuron and WTA circuits require a low voltage for operation, and low power is consumed by the circuits.
6.2 Future Work
Any work is never finished; if it is stopped or halted, then it is destroyed. So we present here some of the future work that can be done to fine-tune and utilize the product of this work.
6.2.1 Array/Layers of Neurons
Using the neuron designed in this work, one can make an array of neurons and measure the power consumed or dissipated. This can be done in simulations as well as on a fabricated IC. Further, layers of neurons should be arranged as in an Artificial Neural Network and the XOR problem applied to check the functioning of the neurons. This concept can be extended considerably, since the power consumed by a single neuron is estimated to be in the nW range.
6.2.2 Emulating Human Vision
The designed neuron, together with this WTA or with a spiking neuron circuit combined with the WTA, can be utilized to emulate human vision. Since the WTA design works in subthreshold and requires few transistors, a large-input WTA can be easily built and used to mimic human vision on silicon. This is a good topic for research.
6.2.3 Layout of the designed circuits
The designed circuits should be evaluated for device mismatch, and post-layout simulations should be performed in order to verify the power consumption, the voltage required and the area occupied by these circuits. Since we have used the CADENCE software for the simulations, there is little probability of a large error in the circuits designed and simulated; however, these checks should be done before going for fabrication.
Bibliography
[1] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley Publishing Company, 1989.
[2] J. W. Nauta and M. Feirtag, "The Organization of the Brain," Scientific American, 1979.
[3] J. Dayhoff, Neural Network Architectures: An Introduction. New York: Van Nostrand Reinhold, 1990.
[4] DARPA Neural Network Study. Fairfax, VA: AFCEA Press, 1988.
[5] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing. Cambridge: MIT Press, 1986.
[6] J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Company, 1981.
[7] R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, vol. 4, no. 2, pp. 4-22, 1987.
[8] W. S. McCulloch and W. H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bull. Math. Biophys., vol. 5, pp. 113-133, 1943.
[9] D. O. Hebb, The Organization of Behavior, A Neuropsychological Theory. New York: John Wiley, 1949.
[10] B. Widrow and M. E. Hoff, "Adaptive Switching Circuits," IRE Western Electric Show and Convention Record, vol. 4, pp. 96-104, 1960.
[11] B. Widrow, Self Organizing Systems. Washington, DC: Spartan Books, 1962.
[12] M. Minsky and S. Papert, Perceptrons. Cambridge, MA: MIT Press, 1969.
[13] S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning, Perception, Development, Cognition, and Motor Control. Boston, MA: Reidel Press, 1982.
[14] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proc. of National Academy of Science, vol. 2, pp. 191-196, 1982.
[15] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, "A Learning Algorithm for Boltzmann Machines," Cognitive Science, vol. 9, pp. 147-169, 1985.
[16] P. J. Werbos, Beyond Regression: New Tools for Prediction and Analysis in Behavioral Sciences. PhD thesis, Harvard University, Cambridge, MA, 1974.
[17] E. M. Izhikevich, "Simple Model of Spiking Neurons," IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572, 2003.
[18] E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. The MIT Press, 2007.
[19] S. Mihalas and N. Niebur, "A Generalized Linear Integrate-And-Fire Neural Model Produces Diverse Spiking Behaviors," Neural Computation, vol. 21, pp. 704-718, 2009.
[20] C. Mead and M. Ismail, Analog VLSI Implementation of Neural Systems. Boston, MA: Kluwer Academic Publishers, 1989.
[21] U. Ramacher and U. Ruckert, VLSI Design of Neural Networks. Boston, MA: Kluwer Academic Publishers, 1991.
[22] G. Indiveri, "A Low Power Adaptive Integrate-and-Fire Neuron Circuit," IEEE ISCAS, pp. 820-823, 2003.
[23] P. Livi and G. Indiveri, "A Current-Mode Conductance-based Silicon Neuron for Address-Event Neuromorphic Systems," IEEE ISCAS, pp. 2898-2901, 2009.
[24] J. H. B. Wijekoon and P. Dudek, "Integrated Circuit Implementation of a Cortical Neuron," IEEE ISCAS, pp. 1784-1787, 2008.
[25] F. Folowosele et al., "A Switched Capacitor Implementation of the Generalized Linear Integrate-And-Fire Neuron," IEEE ISCAS, pp. 2149-2152, 2009.
[26] V. Rangan et al., "A Subthreshold aVLSI Implementation of the Izhikevich Simple Neuron Model," 32nd Annual International Conference of the IEEE EMBS, vol. 32, pp. 4164-4167, 2010.
[27] A. Samil et al., "Low Power VLSI Implementation of the Izhikevich Neuron Model," pp. 169-172, 2011.
[28] S. M. M. S. Ali and S. S. Gajre, "Simple Neuron Circuit with Adjustable Weights and its Application to Neural Based ADC," International Conference on Emerging Technological Trends in Advanced Engineering Research [ICETT 2012], vol. 2, pp. 1-5, 20-21 Feb 2012.
[29] E. A. Vittoz, "Analog VLSI Signal Processing: Why, Where, and How?," Journal of VLSI Signal Processing, vol. 8, pp. 27-44, 1994.
[30] S. Luryi, J. Xu, and A. Zaslavsky, Future Trends in Microelectronics - The Road Ahead. Wiley-Interscience, 1999.
[31] R. F. Lyon and C. Mead, "An Analog Electronic Cochlea," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 36, no. 7, pp. 1119-1134, 1988.
[32] L. J. Stotts, "Introduction to Implantable Biomedical IC Design," IEEE Circuits and Devices Magazine, vol. 5, no. 1, pp. 12-18, 1989.
[33] H. Baltes and O. Brand, "CMOS Integrated Microsystems and Nanosystems," Proceedings of the SPIE Conference on Smart Electronics and MEMS, vol. 36, no. 73, pp. 2-10, 1999.
[34] "The National Technology Roadmap for Semiconductors. Technology Needs," tech. rep., Semiconductor Industry Association, 2000.
[35] C. Lin and M. Ismail, "Robust Design of LV/LP Low-distortion CMOS Rail-to-Rail Input Stages," Analog Integrated Circuits and Signal Processing, vol. 21, pp. 153-61, 1999.
[36] R. Gregorian and G. C. Temes, Analog MOS Integrated Circuits for Signal Processing. Wiley-Interscience, 1986.
[37] J. Wu and K. Chang, "MOS Charge Pumps for Low-Voltage Operation," IEEE Journal of Solid State Circuits, vol. 33, pp. 592-7, 1988.
[38] Y. Tsividis, "Externally Linear, Time-Invariant Systems and their Application to Companding Signal Processors," IEEE Transactions on Circuits and Systems-II, vol. 44, pp. 65-68, 1997.
[39] M. G. Degrauwe, J. Rijmenants, E. A. Vittoz, and H. J. De Man, "Adaptive Biasing CMOS Amplifiers," IEEE Journal of Solid State Circuits, vol. 17, pp. 522-528, 1982.
[40] E. Vittoz and J. Fellrath, "CMOS Analog Integrated Circuits based on Weak Inversion Operation," IEEE Journal of Solid State Circuits, vol. 12, pp. 224-31, 1977.
[41] K. C. Smith and A. Sedra, "The Current Conveyor: A New Circuit Building Block," Proceedings of the IEEE, vol. 56, no. 8, pp. 1368-1369, 1968.
[42] K. A. Boahen, A. G. Andreou, P. O. Pouliquen, and R. E. Jenkins, "Current-Mode based Analog Circuits for Synthetic Neural Systems," U.S. Patent 5 206 541, 27 April 1993.
[43] A. G. Andreou, K. A. Boahen, A. Pavasovic, P. O. Pouliquen, R. E. Jenkins, and K. Strohbehn, "Current-Mode Subthreshold MOS Circuits for Analog VLSI Neural Systems," IEEE Trans. Neural Netw., vol. 2, no. 2, pp. 205-213, 1991.
[44] B. Gilbert, "Translinear Circuits: A Proposed Classification," Electronics Letters, vol. 11, no. 1, pp. 14-16, 1975.
[45] A. G. Andreou and K. A. Boahen, "A 48,000 Pixel, 590,000 Transistor Silicon Retina in Current-Mode Subthreshold CMOS," Proceedings of the 37th Midwest Symposium on Circuits and Systems, pp. 97-102, August 1994.
[46] G. Han and E. Sanchez-Sinencio, "CMOS Transconductance Multipliers: a Tutorial," IEEE Trans. on Circuits and Systems II: Analog and Digital Signal Processing, vol. 45, no. 12, pp. 1550-1563, 1998.
[47] B. Razavi, Design of Analog Integrated Circuits. McGraw Hill, 2001.
[48] F. Diotalevi and M. Valle, "An Analog CMOS Four Quadrant Current-mode Multiplier for Low Power Artificial Neural Networks Implementation," ECCTD'01, Helsinki, Finland, pp. III-325 - III-328, 28-31 August 2001.
[49] D. M. Binkley, Tradeoffs and Optimization in Analog CMOS Design. The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England: John Wiley & Sons Ltd, 2008.
[50] M. Riesenhuber and T. Poggio, "Hierarchical Models of Object Recognition in Cortex," Nature Neuroscience, pp. 211, 1999.
[51] W. Maass, "On the Computational Power of Winner-Take-All," Neural Computation, 2000.
[52] J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead, "Winner-Take-All Networks of O(n) Complexity," tech. rep., California Institute of Technology (CALTECH), Pasadena, California 91125, 1988.
[53] P. O. Pouliquen, A. G. Andreou, K. Strohbehn, and R. E. Jenkins, "An Associative Memory Integrated System for Character Recognition," Proc. 36th Midwest Symp. Circuits Systems, pp. 762-765, August 1993.
[54] J. A. Starzyk and X. Fang, "CMOS Current-Mode Winner-Take-All Circuit with both Excitatory and Inhibitory Feedback," Electron. Lett., vol. 29, no. 10, pp. 908-910, 1993.
[55] S. P. DeWeerth and T. G. Morris, "CMOS Current-Mode Winner-Take-All Circuit with Distributed Hysteresis," Electron. Lett., vol. 31, no. 13, pp. 1051-1053, 1995.
[56] G. Indiveri, "A Current-Mode Hysteretic Winner-Take-All Network, with Excitatory and Inhibitory Coupling," Analog Integr. Circuits Signal Process., vol. 28, pp. 279-291, 2001.
[57] A. Fish, V. Milrud, and O. Yadid-Pecht, "High-Speed and High-Precision Current Winner-Take-All Circuit," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 52, no. 3, pp. 131-135, 2005.
[58] M. Rahman, K. Baishnab, and F. Talukdar, "A Novel High Precision Low Power Current Mode CMOS Winner-Take-All Circuit," Int. J. Engineering Science and Technology, vol. 2, no. 5, pp. 1384-1390, 2010.
[59] D. Moro-Frias, M. T. Sanz-Pascual, and C. A. de la Cruz Blas, "A Novel Current-Mode Winner-Take-All Topology," European Conference on Circuit Theory and Design (ECCTD), no. 20, pp. 134-137, 2011.
[60] Z. S. Gunay and E. Sanchez-Sinencio, "CMOS Winner-Take-All Circuits: A Detail Comparison," Proceedings of 1997 IEEE International Symposium on Circuits and Systems (ISCAS '97), vol. 1, pp. 41-44, 1997.
[61] B. Sekerkiran and U. Cilingiroglu, "Improving the Resolution of Lazzaro Winner-Take-All Circuit," International Conference on Neural Networks, vol. 2, pp. 1005-1008, 1997.
[62] S. Hemati and A. H. Banihashemi, "A Current Mode Maximum Winner-Take-All Circuit with Low Voltage Requirement for Min-Sum Analog Iterative Decoders," Proceedings of the 2003 10th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2003), vol. 1, pp. 47, 2003.
[63] J. A. Starzyk and X. Fang, "CMOS Current-Mode Winner-Take-All Circuit with both Excitatory and Inhibitory Feedback," Electronic Letters, vol. 29, no. 10, pp. 908-910, 1993.
[64] A. Fish, V. Milrud, and O. Yadid-Pecht, "High Speed and High Precision Current Winner-Take-All Circuit," IEEE Transactions on Circuits and Systems-II, vol. 52, no. 3, pp. 131-135, 2005.