
JID: AMC

ARTICLE IN PRESS [m3Gsc;February 28, 2017;11:52]

Applied Mathematics and Computation 0 0 0 (2017) 1–18

Contents lists available at ScienceDirect

Applied Mathematics and Computation


journal homepage: www.elsevier.com/locate/amc

Analog, parallel, sorting circuit for the application in Neural Gas learning algorithm implemented in the CMOS technology
Tomasz Talaśka a,∗, Rafał Długosz a,b
a UTP University of Science and Technology, Faculty of Telecommunications, Computer Science and Electrical Engineering, ul. Kaliskiego 7, 85-796 Bydgoszcz, Poland
b Delphi Automotive, ul. Podgórki Tynieckie 2, 30-399 Kraków, Poland

ARTICLE INFO

Article history: Available online xxx

Keywords: Sorting circuit; Self-Organizing Neural Networks; Neural Gas learning algorithm; Parallel signal processing; CMOS implementation; Analog signals processing

ABSTRACT

The paper presents a novel circuit, implemented in the CMOS technology, that allows for sorting analog signals in parallel. The circuit is to be used in neural networks trained in accordance with the Neural Gas (NG) learning algorithm implemented in the CMOS technology. The role of the circuit is to determine, for a given learning pattern, the winning neuron as well as its neighbors. The proposed circuit is versatile. It can also be used, for example, in nonlinear filtering of analog signals. It is capable of simultaneously performing several typical nonlinear operations, including Min, Max and median filtering. The circuit offers high accuracy, which means that it is able to distinguish signals that differ by a relatively small value. However, the accuracy depends on the calculation time. For example, to be able to distinguish signals that differ by 10 nA (for a maximum range of 10 μA), the calculation time has to be set to at least 1 μs. To improve the accuracy to 5 nA, the calculation time has to be doubled. The circuit provides the sorted list of signals, in accordance with their values. This information contains both the positions of the signals on the sorted list and their values. The first parameter is used in the NG learning algorithm. The circuit was implemented in the TSMC 180 nm CMOS technology and verified by means of the corner analysis in the HSpice environment. For an example case of eight inputs varying between 1 and 10 μA, the circuit dissipates an average power of 300 μW at a data rate of one million sorting operations per second.

© 2017 Elsevier Inc. All rights reserved.

1. Introduction

Artificial neural networks (ANNs) are versatile tools used for data processing and classification. In the literature one can find a wide spectrum of examples of systems in which ANNs are used. These include control engineering, data mining and classification, prediction of signals, data analysis [1,2] and others. One can also observe a growing use of ANNs in mechanics, electronics and power engineering, in which such tools are applied, for example, to load forecasting in power systems. ANNs are also more and more frequently used in medical diagnostics for the analysis of biomedical signals [3,4].
In recent years one can observe a rapid development of portable devices, in which low power dissipation is in high demand. An efficient hardware implementation of ANNs can lead to a strong reduction in energy consumption and miniaturization of such devices.


∗ Corresponding author.
E-mail addresses: tomasz.talaska@gmail.com, talaska@utp.edu.pl (T. Talaśka), rafal.dlugosz@gmail.com (R. Długosz).

http://dx.doi.org/10.1016/j.amc.2017.02.030
0096-3003/© 2017 Elsevier Inc. All rights reserved.

Please cite this article as: T. Talaśka, R. Długosz, Analog, parallel, sorting circuit for the application in Neu-
ral Gas learning algorithm implemented in the CMOS technology, Applied Mathematics and Computation (2017),
http://dx.doi.org/10.1016/j.amc.2017.02.030

ANNs can be classified according to different criteria, among which the learning algorithm is one of the most important ones. In general, NNs can be trained with or without supervision. In our works we focus on the algorithms from the second group, as such algorithms can be relatively easily implemented at the transistor level, while simultaneously being efficient in many practical applications. In this paper we focus on the Neural Gas (NG) algorithm [5–8], providing some modifications relevant for its efficient hardware implementation.
The literature studies show that the NG algorithm belongs to the most flexible in the group of unsupervised learning rules [1]. It introduces the concept of a soft neighborhood between neurons, which is built 'ad hoc' on the basis of the instantaneous distances of particular neurons to the winning neuron. In this aspect the algorithm substantially differs from other algorithms in this group, e.g. the winner takes most (WTM) algorithm [9], in which the neurons have fixed connections with other neurons in the map. In NG NNs only those neurons that are in the closest proximity to the winning neuron in the input data space are considered neighbors. This approach is more "natural" than the one offered by the WTM NN, in which the neurons have no possibility to choose their neighbors.
In the literature one can find several variants of the NG learning algorithm. One of them is the Growing Neural Gas (GNG), with modifications that include Grow When Required (GWR) [10] and the Incremental Growing Neural Gas (IGNG) [11]. In these algorithms some parameters that are constant in the classical NG NN become variables modified during the learning process. One of them is the number of neurons that take part in the competition at a given stage of the learning process.
A software realization of the NG NN is relatively simple. In this case, after the phase of calculating the distances between particular neurons and a given learning pattern, the neurons are sorted according to the distance values, using typical sorting algorithms. After the sorting operation the winner and its closest neighbors are located at the top of the sorted list. The sorting operation is fast if the number of neurons is small; however, the computation effort strongly increases with the number of neurons. Even the realization of the more sophisticated GNG algorithms is relatively simple in this case, as new neurons can be dynamically added to (or removed from) a list of neurons, which is a well known technique. In practice the software realization of the NG algorithm is less complex than that of the WTM algorithm, in which the realization of the fixed neighborhood is usually more difficult [12].
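The software procedure described above can be sketched in a few lines. This is a minimal illustrative model; the function name and the use of the Euclidean metric are our assumptions, not code from the referenced works:

```python
# One cycle of a software NG network: compute the distances of all
# neurons to a pattern x, then sort the neurons by distance. After
# sorting, the winner and its closest neighbours sit at the top of
# the list, exactly as described in the text.

def ng_cycle_order(weights, x):
    """Return neuron indices ordered by increasing distance to pattern x."""
    def dist(w):  # Euclidean distance ||x - w||
        return sum((xi - wi) ** 2 for xi, wi in zip(x, w)) ** 0.5
    d = [(dist(w), i) for i, w in enumerate(weights)]
    d.sort()                      # the typical O(n log n) software sorting step
    return [i for _, i in d]      # position on this list = rank m(i)

order = ng_cycle_order([[0.0, 0.0], [1.0, 0.9], [0.4, 0.5]], [0.5, 0.5])
# order[0] is the winning neuron, order[1] its closest neighbour, ...
```

As the text notes, this step dominates the software cost when the number of neurons grows, which motivates the parallel hardware alternative.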
An opposite situation occurs in hardware realizations, in which the implementation of the NG algorithm is more complex than that of the WTM one. This is due to the sorting operation, which becomes challenging if the number of neurons is large. In GNG algorithms an additional problem is an efficient realization of a mechanism that allows for increasing/decreasing (activating/deactivating) the number of neurons on the basis of, for example, the course of the quantization error, Qerr. Calculating Qerr in hardware is a problem in itself.
In the literature one can find various parallel sorting circuits, in which the computation time only slightly depends on the number of input signals [13–16]. In these cases, however, other problems appear, such as the precision of the sorting operation and the ability to distinguish between signals whose values differ only slightly. Additionally, the number of inputs usually has an impact on the precision [13]. Another problem is the hardware complexity, which usually strongly increases with the number of input signals. As a reference to the state of the art, several example solutions can be considered [14–16]. The sorting circuit in [14] requires a large number of external controlling clock signals, equal to the number of the sorted signals. For large numbers of neurons the hardware complexity of such a circuit becomes very large, which makes it unsuitable for NNs implemented in hardware. One of the problems in this case is the complex multi-phase clock generator that has to be used to control the circuit. The circuits reported in [15,16] determine the sorted values of particular analog signals; however, the information about the position of particular signals on the sorted list is not available. For this reason, these circuits are not useful for the application in NNs.
The analysis of the state-of-the-art solutions shows that there is still room for new solutions in this area. For this reason, in this paper we focus on the development of a novel sorting circuit suitable for such NNs realized at the transistor level. Other components used in this NN are similar to those which we have used in our former projects of other NNs trained without supervision, such as the WTM [17] and the Winner Takes All (WTA) NNs [18].
To make the state-of-the-art study complete, we tried to find, without success, example realizations of complete NG NNs in Field Programmable Gate Arrays (FPGAs) or in Graphics Processing Units (GPUs). One possible reason that such solutions do not exist is the difficulty of realizing the sorting and neighborhood mechanisms, which in the case of parallel NNs pose a serious challenge. We also did not find any hardware implementations of the GNG algorithms.
Due to various limitations imposed by hardware, only relatively simple learning algorithms are implemented in this way, additionally with various simplifications. We have an idea how to implement the mechanism that activates/deactivates the neurons in the GNG learning algorithm; however, in this case various limitations have to be taken into account. One of the main problems is the maximum number of neurons that have to be used, which is not known in advance, as it depends on the structure of the dataset. One possibility is to implement on chip a number of neurons that seems sufficient for a wide range of applications. However, in many cases this number of neurons will never be fully used, which makes such a solution suboptimal.
The comparison of transistor-level realizations with FPGA or GPU implementations is not straightforward, as in particular cases different aspects and parameters are considered the most important. In the FPGA/GPU approach the paramount features include a high data rate and the possibility to reprogram the system, even at the expense of relatively high power dissipation. Such solutions are suitable for short series, due to the relatively high price of a single unit. In the case of the implementation of the NN as an application specific integrated circuit (ASIC), the main effort is placed on reducing the power dissipation
and the chip area [17–20]. Such solutions are less expensive in the case of longer series. In practice, various optimization techniques are used in each case.
Since we did not encounter examples of hardware implementations of complete NG NNs, we briefly discuss the parameters of example SOMs realized in FPGAs. In the literature one can find several realizations of this type [21–23]. The WTM algorithm is to some extent similar to the NG one, as most of the building blocks are the same in both cases. The most visible difference is in the structure of the neighborhood mechanism, as described above. After completing the learning process the neighborhood in both cases shrinks to a single neuron only, so both NNs then operate in the same way.
As mentioned above, in the case of FPGA implementations the main effort is put on performance, while the power dissipation is less important. This is visible, for example, in the WTM SOM realization reported in [21]. In that case the achieved performance is two orders of magnitude worse than the performance of similar NNs realized as ASICs [17]. A parameter that can be used to compare FPGA realizations with the full-custom approach is the number of connection updates per second (CUPS). A figure of merit (FOM) can be defined as CUPS over the power dissipation ([CUPS/W]), which is equivalent to connection updates over the consumed energy ([CU/J]). In [21], a NN with 25 neurons and 23 inputs, at a data rate of 10.86 kS/s, dissipates a power of 32 mW. The SOM achieves in this case 6.25 MCUPS (FOM = 192e6 [CU/J]). For comparison, the circuit presented in [17] achieves a computation power of 1920 MCUPS, with a FOM equal to 10e9 [CU/J].
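The CUPS figure quoted for [21] can be reproduced from the reported parameters. The short check below is our own arithmetic; the small difference from the quoted FOM of 192e6 presumably comes from rounding in the cited work:

```python
# CUPS for a SOM: every input sample updates all connections, i.e.
# connection updates per second = neurons * inputs * data rate.
neurons, inputs, rate_sps = 25, 23, 10_860    # parameters reported for [21]
cups = neurons * inputs * rate_sps            # ~6.24e6, i.e. ~6.25 MCUPS
power_w = 32e-3                               # 32 mW power dissipation
fom = cups / power_w                          # [CU/J], on the order of 1.9e8
```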
In this paper we propose a parallel, hardware-oriented sorting algorithm that does not require any multi-phase controlling clock generator. It provides the values of particular sorted signals, as well as their positions on the sorted list. The solution is suitable for a soft winner selection circuit (SWSC) that quickly determines the winning neuron and its closest neighbors. Both the winning neuron and its neighboring neurons are allowed to adapt their weights, with intensities that depend on the positions of particular neurons on the sorted list.
The analog part of the proposed circuit operates in the current mode, in contrast to most similar circuits reported in the literature, which operate in the voltage mode. We decided to use the current mode, as the circuit will directly cooperate with other components of the NN, already designed in this mode. However, the circuit can also be used for sorting voltage signals after the addition of voltage-to-current converters at its inputs (resistors are sufficient). To make the state-of-the-art study complete, we briefly refer to digital sorting circuits [24,25]. These circuits are in practice suitable only for fully digital NNs, as in the case of analog NNs they would require analog-to-digital and digital-to-analog converters. This is not practical, especially if the number of the sorted signals is large.
The proposed circuit can also be used, for example, as a nonlinear filter that performs Min, Max and Median filtering. Filters of this type are frequently used in image processing to improve the quality of images, for example, for noise removal. Such filters can be joined in a chain in order to perform more complex tasks, including the morphological opening and closing operations commonly used to reconstruct digital images affected by noise. Median filtering relies on sorting samples from a given dataset and selecting the central value from the sorted list. Since in the proposed solution all signals are sorted, we can easily obtain the Min, the Max and the Median values in parallel.
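In software terms, all three filter outputs fall out of a single sorting pass. A minimal sketch, assuming an odd-length sample window as is usual for median filters:

```python
# Min, Max and Median obtained from one sorting pass, mirroring how
# the proposed circuit exposes all three results simultaneously.
def min_max_median(samples):
    s = sorted(samples)                  # the circuit performs this step in parallel
    return s[0], s[-1], s[len(s) // 2]   # central value = median filter output

lo, hi, med = min_max_median([7, 1, 9, 4, 3])   # -> (1, 9, 4)
```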
The paper is organized as follows. In the next section we briefly present the Neural Gas learning algorithm. Since the NG learning algorithm largely (in about 80% of its operations) works like the WTA algorithm, realized by us earlier, we present selected details of particular blocks of the former project. In Sections 3 and 4 we present transistor-level implementations as well as HSPICE simulations of the proposed sorting circuit. In Section 5 we present a software model of a complete NG NN that enables modeling some limitations that appear in hardware implementations. The conclusions are drawn in Section 6.

2. An overview of the Neural Gas learning algorithm realized in hardware

The NG NN belongs to the group of networks trained without supervision [5]. A typical learning process in such networks is illustrated schematically in Fig. 1.

Fig. 1. Illustration of the learning process of the NG NN.

The learning process starts with generating, usually at random, initial weights for all

neurons. This phase is called the initialization. The learning phase that starts in the next stage is divided into epochs. During each epoch all patterns, X, from a given input dataset are presented to the network in a random fashion. We denote the presentation of a single pattern X from a given dataset, and the subsequent calculations associated with this pattern, as a cycle. In particular cycles the NN calculates a distance, d, between the provided pattern X and the vectors W of all neurons in the NN. In the next step the sorting operation makes it possible to determine which neurons are located in the closest proximity of a given pattern X; those neurons are allowed to adapt their weights.

Fig. 2. A general block diagram of the Self-Organizing Neural Network based on the Neural Gas algorithm.
In each cycle the calculated distances of all neurons in the NN to a given pattern X are sorted in the following order:

d_0 < d_1 < d_2 < d_3 < ... < d_{n-1},   (1)

where d_k = ||X - W_{k(i)}|| is the distance of the ith neuron that occupies the kth position on the sorted list. The distance d_0 is reserved for the winning neuron. The adaptation of all neurons, i, that belong to the neighborhood of the winning neuron is performed in accordance with the formula:

W_i(t + 1) = W_i(t) + η · G(i, x) · (X(t) − W_i(t)),   (2)

where η is the learning rate, G(i, x) is a neighborhood function, and W_i(t + 1) is the weight vector of the ith neuron after completing the adaptation process.
In the NG learning algorithm the Gaussian function is usually used as the neighborhood function (NF). It is defined as follows:

G(i, x) = exp(−k · m(i) / λ)   for λ > m(i),
G(i, x) = 0                    otherwise,   (3)

where k is a gain, m(i) is the position of a given neuron on the sorted list, and the λ parameter can be understood as a neighborhood radius. The winning neuron opens the sorted list with m(i) = 0. Its closest neighbor is located at the next position, with m(i) = 1, etc. The neighborhood shrinks during the learning process (λ → 1). For λ = 1 only the winning neuron is subject to adaptation, which means that the NG algorithm reduces to the WTA learning rule. For λ > 1 the neurons that belong to the neighborhood are adapted with the intensity determined by the neighborhood function G(i, x). Other neurons in the NN remain unchanged.
Due to the large hardware complexity in the case of using the Gaussian NF, in our investigations we also consider other NFs, for example the much simpler triangular function proposed in [26].
In practice, if we omit the sorting operation, the WTA and the NG NNs operate in almost the same way. In the case of the WTA algorithm only the winning neuron is subject to adaptation. As a result, the WTA NN operates as the NG NN for λ = 1. Other neurons remain inactive in this case. This is important for us, as in our former works we have already proposed, implemented and tested the remaining components of the NN. The only block that needs to be substituted with a new one is the WSC (winner selection circuit).
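One adaptation cycle following Eqs. (1)–(3) can be sketched behaviourally as follows. The scalar weights and all parameter values here are illustrative assumptions, not the hardware implementation:

```python
import math

# One NG adaptation cycle following Eqs. (1)-(3): neurons are ranked
# by distance to the pattern x, and each neuron with rank m inside the
# neighbourhood radius (lambda > m) is pulled towards x with strength
# G = exp(-k * m / lambda), per Eq. (3); other neurons stay unchanged.

def ng_adapt(weights, x, eta=0.5, lmbd=2.0, k=1.0):
    ranked = sorted(range(len(weights)), key=lambda i: abs(x - weights[i]))
    for m, i in enumerate(ranked):
        g = math.exp(-k * m / lmbd) if lmbd > m else 0.0   # Eq. (3)
        weights[i] += eta * g * (x - weights[i])           # Eq. (2)
    return weights

w = ng_adapt([0.0, 1.0, 4.0], x=0.2)
# the winner (index 0) moves the most; index 2 lies outside the radius
```

For lmbd = 1 only the winner (m = 0) satisfies lambda > m, and the update degenerates to the WTA rule, as stated in the text.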


Fig. 3. Block diagram of the proposed sorting circuit.

Fig. 4. Block of the comparators used to convert the input currents to delays seen at the outputs of the comparators. In the real design cascode CMs are used; they are omitted here for simplicity.

In the NG algorithm the neighborhood is determined on the basis of the instantaneous distances between particular neurons. Even if the total number of neurons in the NN is large, the neighborhood usually embraces only a small number of them. The proposed circuit takes advantage of that fact. Since it determines the sorted list of neurons in parallel, it makes it possible to obtain the information about the first λ neurons without sorting the remaining ones. This is one of the main advantages of the proposed solution. A different situation appears in software realizations, in which all neurons have to be sorted in order to indicate those with the closest distances to a given learning pattern X.

2.1. Other hardware components required to build the NG NN

A general block diagram illustrating the NG learning algorithm is shown in Fig. 2. Excluding the sorting circuit proposed
in this paper, all remaining blocks have been proposed by us earlier, and used in our former projects of the WTA NNs


Fig. 5. Digital control block (DCB) of the proposed sorting circuit.

realized in the TSMC 180 nm CMOS technology. The prototype chips containing that NN have been verified by means of
comprehensive laboratory tests [18–20,27,28]. The meaning of particular blocks shown in this diagram is as follows:

• DCC (Distance Calculation Circuit) – the block responsible for the calculation of a distance, d, between a given pattern X
and the weight vector of a given neuron [27].
• ADM (Adaptation Mechanism) – the circuit responsible for the adaptation of the weights of particular neurons [19].
• SWSC (Soft Winner Selection Circuit) – the circuit proposed in this paper used to sort distances, d, of particular neurons.

Both the DCC and the ADM are fully analog circuits that operate in the current mode. This mode has been selected because in this case the summation and subtraction operations can be simply realized at circuit nodes, without using operational amplifiers.

3. The proposed sorting circuit – soft winner selection circuit

The proposed circuit, shown in Fig. 3, consists of both the analog and the digital parts. The analog block is composed
mainly of comparators that compare particular input signals with a reference ramp signal. Particular input signals represent
distances, d, of particular neurons to a given pattern X. The order in which the outputs of particular comparators switch
from logical ‘0’ to ‘1’ is registered by the digital block.

3.1. Analog part of the proposed SWSC

The block of the comparators – the core of the analog part, shown in Fig. 4 – is realized using cascoded current mirrors. We use such mirrors to improve the quality of the sorting operation. Each of the input currents is compared with a current that is a separate copy of the input reference current IREF. To make this possible, we copy the input reference current to particular comparators using transistors with equal sizes, oversized to minimize mismatch effects. This reduces the risk of unequal values of the particular reference signals provided to the comparators, which could disturb the precision of the circuit. The reference current is a ramp signal that in this case rises from 0 to 10 μA.


Fig. 6. Selected simulation results illustrating the performance of the analog part of the proposed sorting circuit: (a) results for different values of the
input currents, (b) results for different periods of the IREF sawtooth reference signal.

Depending on whether the IREF current is ascending (rising from 0 to 10 μA) or descending (10 → 0 μA), the system sorts the signals in either ascending or descending order. This feature is paramount in the case of the application of the circuit in nonlinear Min/Max filtering. In the Max filter the descending ramp signal should be used, to catch the Max signal first, and additionally the outputs of the comparators have to be negated. Since in the NNs the Min values are important, we use the ascending ramp. Tests of the analog part of the proposed SWSC are presented in Section 4.1.
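The current-to-time conversion performed by the ramp comparison can be modelled in a few lines. This is an idealized sketch; the perfectly linear ramp and the example values are our assumptions:

```python
# With a reference ramp rising linearly from 0 to i_max over period T,
# the comparator watching input current I fires at t = T * I / i_max.
# Smaller currents therefore fire earlier, yielding an ascending sort.

def firing_times(currents_ua, i_max_ua=10.0, period_us=5.0):
    """Return (firing time in us, input current) pairs in firing order."""
    return sorted((period_us * i / i_max_ua, i) for i in currents_ua)

times = firing_times([1.0, 2.0, 1.1])
# the 1.0 uA input fires first, then 1.1 uA, then 2.0 uA
```

The model also makes the precision/time trade-off visible: two currents separated by ΔI fire T·ΔI/i_max apart, so a longer period T widens the gap the digital block has to resolve.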

3.2. Digital part of the proposed circuit

The digital control block (DCB) of the proposed sorting circuit is shown in Fig. 5. It is composed of channels containing a digital support circuit (DSC) and a shift register (SR) composed of n D flip-flops (DFFs) connected in a chain. The role of the DCB is to capture the impulses generated by particular comparators and store them in the SRs in the order in which they appear in time. The outputs of particular comparators are captured by latches in the corresponding DSCs and provided to the outputs of these blocks. The outputs of the DSCs are then connected to the gates of particular NMOS transistors in the wired-OR gate, as well as to the D inputs of the first-in-chain D flip-flops (DFFs) in the SRs.
The circuit operates as follows. If the output of any of the DSCs switches to '1', the output of the wired-OR gate becomes '1' for a while, forming a narrow impulse. This impulse clocks all DFFs in all SRs in the DCB. The input of the SR


Fig. 7. Required time period of the reference signal vs. the minimum difference between two input currents that can be distinguished by the circuit.

that is connected with a given DSC is already '1', so this value is stored at the first position in this SR. Immediately after this phase the given input signal is blocked, while the wired-OR gate returns to its initial state, '0', awaiting a new impulse coming from another input block. After detecting an impulse, the wired-OR gate becomes ready to catch a new signal thanks to the OR gates that generate a reset signal. This signal excludes the section that caused the activation of the particular SR.
After some period of time another DSC generates a '1' that, in turn, triggers the output of the wired-OR gate again. As a result, the first position of the corresponding SR becomes '1', while the '1' in the previous SR is shifted to the 2nd position. In general, each following impulse at the output of the wired-OR gate shifts the '1' signals in particular SRs by one position, thus establishing the order on the list of sorted signals.
Each of the SR blocks is composed of n DFFs, which means that the overall DCB is able to store the positions of the first n signals. The value of n has to be as large as the largest neighborhood used in the learning process.
When the '1' signal appears at the outputs of all OR gates, the sorting operation is completed. This is signaled by the '1' signal at the output of the AND gate. This signal can, for example, trigger the Global Reset (GR) signal that resets all registers, thus allowing a new sorting operation to start. Tests of the digital part of the proposed SWSC are presented in Section 4.2.
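The shift-register behaviour described above can be mimicked with a short behavioural model. This is our Python abstraction of the DCB, not a netlist; the channel indexing and the readout convention are assumptions, and it assumes every channel eventually fires:

```python
# Behavioural sketch of the DCB: each comparator impulse (taken in
# time order) clocks every shift register, pushing the stored '1's
# one position deeper, and then a '1' is latched into the first
# position of the firing channel's SR. After all impulses, the depth
# of each '1' encodes that channel's rank on the sorted list.

def dcb_positions(firing_order, n_channels):
    regs = [[0] * n_channels for _ in range(n_channels)]   # one SR per channel
    for ch in firing_order:
        for r in regs:                 # wired-OR impulse clocks all SRs
            r.insert(0, 0)
            r.pop()
        regs[ch][0] = 1                # '1' latched into the firing channel's SR
    # the '1' of the earliest channel ends up deepest; convert depth to rank
    return [n_channels - 1 - r.index(1) for r in regs]

ranks = dcb_positions([2, 0, 1], n_channels=3)
# channel 2 fired first -> rank 0 (the winner); channel 1 fired last -> rank 2
```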

4. Verification of the proposed circuit and discussion of the obtained results

Tests of the proposed sorting circuit are presented separately for the analog part in Section 4.1 and the digital part in Section 4.2. The presented simulations include the general behavior of the circuit, and the corner analysis in which we test the robustness of the circuit against process, voltage and temperature (PVT) variations.
On the basis of the realized circuit, in Section 4.4 we provide an estimation of the hardware complexity of the proposed solution, as well as the power dissipation, as a function of the number of neurons and inputs of the NN.

4.1. Simulations of the analog part of the proposed SWSC

The circuit was tested for an example case of eight input signals (I1 ÷ I8). Selected HSpice simulation results of the circuit designed in the TSMC 180 nm CMOS technology are shown in Fig. 6.
In diagram (a), in the period from 0 to 5 μs particular input signals differ by 1 μA, while in the period from 5 to 10 μs they differ by 100 nA. The reference current IREF is a sawtooth wave varying from 0 to 10 μA with a period of T = 5 μs.
Fig. 6(b) presents the results for three different periods of the reference current: T = 15, 85 and 300 μs. The input signals vary between 1 μA and 1.35 μA with a step of 50 nA. The detection time equals 3, 15 and 500 μs for the particular cases, respectively. This is an important feature, as the precision (the detected differences between the input signals) of the circuit can be controlled by the period of the reference signal. As can be observed, the circuit operates properly for different ranges of the input signals and different periods of the reference current. The circuit was tested for differences between the input currents varying between 1 nA and 1 μA. For smaller differences the period of the reference signal has to be larger to assure a minimum required distance, in the time domain, between two adjacent impulses generated by the comparators. If this distance in time is too small, the digital block is unable to distinguish the impulses. This leads to a situation in which


Fig. 8. Simulations of the overall sorting circuit: (a) Input signals and the reference current, (b) output signals of particular comparators, (c) input signals
of the wired-OR gate, (d) output of the wired-OR gate.


Fig. 9. Simulations of the overall sorting circuit: Output signals of particular shift registers (SRs).


Fig. 10. Time used by the sorting circuit to process a single input signal, for an example case of 20°C, VDD = 1.8 V and the TT transistor model.

several signals become registered at the same position in the sorted list. Fig. 7 illustrates the dependency between the minimum difference between two input currents and the required period of the reference signal that enables distinguishing the impulses generated by the comparators.

4.2. Simulations of the digital part of the proposed circuit

Selected simulation results of the proposed sorting circuit are shown in Figs. 8 and 9. The presented tests were carried out for eight input signals with values varying between 1 and 5 μA (IIN1 = 1.0 μA, IIN2 = 1.1 μA, IIN3 = 1.3 μA, IIN4 = 1.2 μA, IIN5 = 5.0 μA, IIN6 = 2.0 μA, IIN7 = 3.0 μA, IIN8 = 4.0 μA). The ascending reference signal IREF rises from an initial value of 0 to 6 μA over the period from 0 to 6 μs. Diagram (a) of Fig. 8 presents the input currents, which are subject to sorting, as well as the reference ramp current. Diagrams (b) and (c) illustrate the output signals of particular comparators and the inputs of their corresponding SRs. Diagram (d) shows the output of the wired-OR gate which controls the SRs. Fig. 9 illustrates the states of particular SRs. In the NG NN, the proposed sorting circuit is directly connected to the Adaptation Mechanism (ADM). The position of a given neuron on the sorted list is a parameter provided to the neighborhood function (NF) G (see Eq. (3)), which is used to calculate the strength of the adaptation process for that neuron. The value of the NF equals 1 only for the winning neuron. For the other neurons its value decreases monotonically with the position on the sorted list. Due to the hardware complexity we consider using a triangular NF instead of the exponential NF typically used in this case. Simulation investigations show that this function is sufficient [26].
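The triangular NF mentioned above can be sketched as follows (a model-level illustration; the exact on-chip realization follows [26], and the linear shape below is an assumption):

```python
def triangular_nf(position, radius):
    """Triangular neighborhood function G: returns 1.0 for the winning
    neuron (position 0 on the sorted list) and decreases linearly to 0
    at the neighborhood radius. A sketch; the hardware function of [26]
    may differ in detail."""
    if radius <= 0:
        return 1.0 if position == 0 else 0.0
    return max(0.0, 1.0 - position / radius)

print(triangular_nf(0, 4))  # 1.0 (winner)
print(triangular_nf(2, 4))  # 0.5
print(triangular_nf(5, 4))  # 0.0 (outside the neighborhood radius)
```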

4.3. Verification of the overall circuit – the corner analysis

All simulation tests presented above were performed for the typical, slow and fast transistor models (TT - Typical-Typical, SS - Slow-Slow, FF - Fast-Fast), for a supply voltage varying in the range from 1.4 to 1.8 V, and a temperature varying between −40 and 100°C. In each of the performed tests the circuit worked properly. The only observed impact of the varying PVT parameters was on the delay introduced by the digital part of the circuit. The analog part was designed using cascoded current mirrors in order to increase the precision of the overall SWSC. The delay is shown in Fig. 10. This time period is measured starting from the moment in which the comparator detects a change at its output (the VCOMP signal). It covers providing the VCOMP signal to the DSC (the VIN signal), then the VIN to the WIRED-OR gate, then setting up the register, and finally resetting the given input of the WIRED-OR gate and decoupling the VIN signal from this input. After this sequence the WIRED-OR gate is ready to process the next events.
Fig. 11 presents the delay time for different values of the PVT parameters. The value of the delay varies from 1.65 to 3.5 ns, for the (1.8 V/20°C/TT) and (1.4 V/20°C/SS) cases, respectively.

4.4. Hardware complexity of the proposed SWSC

Hardware complexity can be assessed by providing the number of transistors as a function of the number of neurons in the SWSC. Transistors used in the digital part of the SWSC dominate in terms of numbers; however, those transistors are usually small. Transistors in the analog part are less numerous, but they have to be oversized to avoid mismatch problems. Fig. 12 presents the number of transistors, as a function of the number of neurons, with a division into the analog and the digital parts.
The analysis has been performed separately for particular components of the NG NN, which include: the initialization block (INIT), the distance calculation circuit (DCC) [27], the SWSC, and the adaptation block (ADM) [19]. The analysis, shown in Fig. 13,

Fig. 11. Time required to process a single input signal, for different values of the PVT parameters.

Fig. 12. The number of transistors used in the SWSC, as a function of the number of neurons.

covers two cases: (a) two- and (b) ten-dimensional input data. For a small number of inputs the SWSC is the dominant block in terms of the number of transistors; however, the silicon area of this block is not so large, as the transistors used are small. For a larger number of inputs and a relatively small number of neurons (up to 50) the contribution of the SWSC circuit is moderate. This results from the fact that the complexity of the SWSC does not depend on the number of inputs, in contrast to the other components.
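The scaling argument above can be illustrated with a rough budget; all per-cell transistor counts below are hypothetical placeholders, not the values reported in Figs. 12 and 13:

```python
def transistor_count(n_neurons, n_inputs,
                     swsc_per_neuron=60, dcc_per_input=40,
                     adm_per_input=50, init_per_input=10):
    """Rough transistor budget of the NG NN blocks. All per-cell
    coefficients are hypothetical placeholders used only to show the
    scaling: the SWSC term depends on the number of neurons alone,
    while the DCC/ADM/INIT terms grow with the input dimension."""
    swsc = swsc_per_neuron * n_neurons
    per_neuron_io = (dcc_per_input + adm_per_input + init_per_input) * n_inputs
    others = per_neuron_io * n_neurons
    return {"SWSC": swsc, "others": others, "total": swsc + others}

# The SWSC share shrinks as the input dimension grows:
print(transistor_count(50, 2))   # {'SWSC': 3000, 'others': 10000, 'total': 13000}
print(transistor_count(50, 10))  # {'SWSC': 3000, 'others': 50000, 'total': 53000}
```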
It is worth noting that the circuit in its current version has been designed in such a way as to enable sorting all neurons in the NN. For this reason, the calculations have been performed for the maximum value of the neighborhood radius, λ. In practice, the value of λ is much smaller than the number of neurons, which allows the circuit to be simplified. The value of λ should be larger if the initialization block is omitted. We can omit this block because the neighborhood mechanism also allows for the activation of the neurons.
Another issue that has to be investigated is the dependency between the power dissipation and the numbers of inputs and neurons. Since the overall NG NN has not been designed yet, we provide (see Fig. 14) an assessment of the power dissipation on the basis of our earlier WTA NN project [18,19,27] and the parameters of the proposed sorting circuit. As can be

Fig. 13. The number of transistors required for the implementation of the overall NG NN, as a function of the number of neurons: (a) for 2-dimensional
inputs patterns, (b) for 10-dimensional inputs patterns.

Fig. 14. Estimated power dissipation of the overall NG as a function of the number of inputs and the number of outputs (neurons).

observed, for 10 inputs and 100 neurons (1000 computational channels) the power dissipation equals 50 mW, which is one order of magnitude less than in an NN realized in an FPGA [22].
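The extrapolation described above can be sketched in one line; the per-channel power figure below is an assumption back-derived from the 50 mW data point, not a measured value:

```python
def estimated_power(n_inputs, n_neurons, power_per_channel=50e-6):
    """Rough power estimate of the overall NG NN. The number of
    computational channels is n_inputs * n_neurons; the per-channel
    figure (50 uW here) is an assumption back-derived from the
    10-input / 100-neuron, 50 mW data point."""
    return n_inputs * n_neurons * power_per_channel

print(estimated_power(10, 100))  # ~0.05 W, i.e. 50 mW
```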

5. System level investigations of the NG NN realized as software model

Comprehensive tests of the circuit at the model level are necessary prior to its implementation as a chip, as changes and modifications are not possible after the chip fabrication. In this section we present selected investigation results that aim to assess the quality of the learning process of the NG NN, performed for different values of particular parameters related to the hardware implementation of this NN. The presented simulations have been carried out in the software model of the NN implemented in the Matlab environment. Due to the topic of the paper we put a special emphasis on the influence of the inaccuracies of the sorting circuit realized in hardware.
One of the main problems that can be encountered is the situation in which two or more neurons are at very similar distances to a given learning pattern, and simultaneously located in a similar area of the input data space. At the circuit level such neurons can be registered 'ex aequo' at the same position in the sorted list. This may cause such neurons to be glued together in the further learning process, so that they are always adapted in the same way. This in turn increases the quantization error, as the number of classes that can be distinguished is reduced. This problem occurs due to the delay introduced by the wired-OR gate.
We first present selected results that illustrate the problem. Then we present ways to deal with the problem of delays in the wired-OR gate at the transistor level.

5.1. Assessment criteria of the learning process of the NG NN

The learning process of the NG NN can be assessed by observing how the NN reacts to particular learning patterns X. In general, it is difficult to say what a proper outcome of the learning process is. One of the measures indicating that the learning process proceeds in the right direction is the decreasing (in the following learning epochs) distances between the X signals and the weights W of the winning neurons. One can assume that the learning process is fully satisfactory if the NN performs a correct quantization of the area occupied by the learning dataset. In SOMs this process is called vector quantization, while the corresponding error is called the quantization error or a distortion measure [9,28]. This error is defined as follows:
$$Q_{err} = \frac{1}{z} \sum_{t=1}^{z} \left\| X(t) - W_j(t) \right\|^{2}, \qquad (4)$$

where z is the number of all learning patterns in a given learning dataset, presented to the NN during a single learning epoch. The index j denotes the winning neuron (the first one in the sorting operation) in a given learning cycle.
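Eq. (4) translates directly into a few lines of model code (a sketch of what the software model computes; the function and variable names are ours):

```python
def quantization_error(patterns, weights):
    """Quantization error of Eq. (4): the mean squared Euclidean distance
    between each learning pattern X(t) and the weight vector W_j of its
    winning (nearest) neuron, averaged over the z patterns of the epoch."""
    total = 0.0
    for x in patterns:
        # squared distance to the nearest neuron (the winner j)
        total += min(sum((xi - wi) ** 2 for xi, wi in zip(x, w))
                     for w in weights)
    return total / len(patterns)

# Patterns sitting exactly on the neurons give a zero error:
print(quantization_error([[0, 0], [1, 1]], [[0, 0], [1, 1]]))  # 0.0
print(quantization_error([[0, 0]], [[3, 4]]))  # 25.0
```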

Fig. 15. Simulation results of the NG NN for different values of the deadband (DB): (a) the ideal case in which DB = 0, (b) DB = 0.05% (5 nA), (c) DB = 0.2% (20 nA), (d) DB = 0.3% (30 nA). At the top of the figure: the example learning dataset composed of 1000 learning patterns divided into 10 classes.

5.2. Investigations of the learning process of the NG NN

Figs. 15–17 present selected simulation results of the learning process of the NG NN. The presented tests have been carried out for different sizes of the NN, with numbers of neurons equal to 20, 50 and 100. The main observed issue was the influence of the insensitivity level (measured in nA) of the circuit on the quality of the learning process. We refer to the insensitivity level as the width of the deadband (DB), as within this band the sorting operation is not performed. All input signals whose values differ by less than the width of the DB are placed at the same position in the sorted list.
Fig. 15 presents the locations of particular neurons after completing the learning process, for different widths of the DB. The DB is expressed in % of the maximum range of the input currents. For example, DB = 0.3% means that for 10 μA the insensitivity level equals 30 nA. In the presented case the number of neurons equals 20, while the learning dataset is composed of 1000 learning patterns divided into 10 classes. The learning process lasted 20 epochs, i.e., each learning pattern has been presented to the NN 20 times in a random order. Diagrams (a)–(d) of Fig. 15 show the outcomes of the learning process for different values of the DB. Fig. 15(a) presents the results for the optimal case, in which there is no DB. In Fig. 15(b) the DB is increased to 5 nA, which eliminated seven of the initial 20 neurons. These neurons have been glued to other neurons. The DB at the level of 30 nA (Fig. 15(d)) is in this case a critical value. Small values of the DB are acceptable if the number of neurons is larger than the number of classes.
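The gluing effect described above can be modeled as a rank operation with an insensitivity band (a behavioral sketch of the sorted-list output, not the circuit itself):

```python
def rank_with_deadband(currents, deadband):
    """Model of the sorted list produced by the SWSC with an insensitivity
    band: after sorting, any signal closer than `deadband` to the previous
    one receives the same position (the 'gluing' effect). Returns one
    position per input signal, in the original input order."""
    order = sorted(range(len(currents)), key=lambda i: currents[i])
    positions = [0] * len(currents)
    pos, last = 0, None
    for idx in order:
        if last is not None and currents[idx] - last >= deadband:
            pos += 1                      # far enough: next position
        positions[idx] = pos              # else: glued to the previous one
        last = currents[idx]
    return positions

# With a 30 nA deadband, 1.00 uA and 1.02 uA are glued to one position:
print(rank_with_deadband([1.00e-6, 1.02e-6, 1.30e-6], 30e-9))  # [0, 0, 1]
```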

Fig. 16. Quantization error for different values of the DB and different numbers of neurons: (a) 20, (b) 50, (c) 100.

Fig. 17. Quantization error vs. neighborhood radius λ for different values of the DB, and for different numbers of neurons: (a) 50, (b) 100.

The impact of the DB is also shown in Fig. 16 as the Qerr for particular values of the DB and different numbers of neurons. As can be noticed, if the value of the DB is below 0.2% (for 20 neurons), the Qerr after the learning process is acceptable. For larger values of the DB the learning process is disturbed. One can observe that for larger numbers of neurons in the NN (Fig. 16(b) and (c)) the value of the DB should be reduced, to assure the optimal quantization of the input data space. This is due to the fact that in these cases more neurons have weights with values similar to those of other neurons in the NN, and thus they are erroneously detected during sorting.
A parameter that also has a direct influence on the quality of the learning process is the neighborhood radius λ. The greater the value of λ, the more neurons are pushed toward a given learning pattern X during the adaptation process (according to Eq. (2)), and thus the more neurons undergo the gluing effect. If two or more neurons are glued together, their weights become synchronized and are modified in the same way. Fig. 17 shows the impact of the neighborhood radius on the learning process for different values of the DB. The smaller the neighborhood radius,

Fig. 18. Hierarchy circuit used to reduce the influence of the DB.

the more accurately the NN performs. In practice there exists an optimum, since if the neighborhood radius is reduced to zero, the Qerr rises again.
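The adaptation step discussed above can be sketched at the model level as follows (the triangular neighborhood shape and the update form are assumptions consistent with the descriptions of Eqs. (2) and (3), not the exact hardware behavior):

```python
def ng_adapt(weights, x, eta, radius):
    """One NG adaptation step (model-level sketch): neurons are sorted by
    distance to the pattern x, and each is pulled toward x with a strength
    given by a triangular neighborhood function of its sorted position.
    Glued neurons (identical weights) share a position and stay identical."""
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    order = sorted(range(len(weights)), key=lambda i: dist(weights[i]))
    for pos, i in enumerate(order):
        # triangular NF: 1 for the winner, linearly decreasing with position
        g = max(0.0, 1.0 - pos / radius) if radius > 0 else (1.0 if pos == 0 else 0.0)
        weights[i] = [wi + eta * g * (xi - wi)
                      for wi, xi in zip(weights[i], x)]
    return weights

w = ng_adapt([[0.0, 0.0], [8.0, 8.0]], [4.0, 0.0], 0.5, 2)
print(w)  # [[2.0, 0.0], [7.0, 6.0]]: the winner moves halfway toward x,
          # the second neuron a quarter of the way
```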

5.3. Proposed improvements of the sorting circuit

The proposed sorting circuit is an asynchronous solution. The reaction time of the wired-OR gate and other components of the circuit is the source of the refractory time, which in turn has a direct influence on the width of the DB. The refractory time (RT) is more or less constant for a given realization of the circuit, i.e. for a given technology. However, its value also depends on the transistor model (P), the value of the supply voltage (V) and the ambient temperature (T). If we know the value of the RT, we can control the width of the DB by adjusting the slope of the ramp reference signal. This is an advantage, as it creates the opportunity to indicate a worst-case scenario for given PVT parameters, for which the circuit always sorts a given set of input signals in the same order.
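The relation between the RT, the ramp slope and the DB width can be captured in one line (a linear-ramp model; the numeric values below are assumed for illustration):

```python
def deadband_width(refractory_time, i_max, ramp_period):
    """Width of the deadband (in amperes) implied by the refractory time
    of the digital part: two currents closer than this value trigger
    comparator impulses that the wired-OR gate cannot separate
    (linear-ramp model; values in the example are assumptions)."""
    slope = i_max / ramp_period       # slope of the ramp reference, in A/s
    return refractory_time * slope

# An assumed 2 ns RT with a 10 uA ramp swept over 1 us gives a ~20 nA DB;
# doubling the ramp period halves the deadband:
print(deadband_width(2e-9, 10e-6, 1e-6))  # ~20 nA
print(deadband_width(2e-9, 10e-6, 2e-6))  # ~10 nA
```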
To reduce the problems with the DB we propose an additional block in the previously described SWSC. This block introduces a hierarchy into the circuit, as follows: if two or more neurons are located at the same position in the sorted list (a given position in all SRs), the hierarchy circuit blocks all neurons that are lower in the hierarchy. The structure of the circuit is shown in Fig. 18. The number of such blocks in the circuit equals n. The corresponding outputs of all SRs are joined to a single hierarchy block. Fig. 18 shows the proposed hierarchy circuit for an example case of the first outputs of the 8-bit registers: Q11, Q21, ..., Q81. In the new approach we rename the outputs of the registers to Q11', Q21', ..., Q81', while Q11, Q21, ..., Q81 become the outputs of the hierarchy circuit. The circuit works as follows: if, for example, the Q11' and Q21' signals equal '1', the value of Q11' is propagated along the chain of NOR-NOT gates. As a result, the left inputs and the outputs of all subsequent AND gates become '0'.
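The masking behavior of the hierarchy block can be modeled as follows (a behavioral sketch of the gate chain, not a netlist):

```python
def hierarchy_mask(register_bits):
    """Sketch of the hierarchy block of Fig. 18 for one sorted-list
    position: among all register outputs (Q11', Q21', ..., listed here
    in priority order) that are set, only the highest-priority one is
    propagated to the block outputs; all lower ones are blocked, which
    resolves 'ex aequo' entries in the sorted list."""
    outputs = []
    blocked = False          # models the block signal of the NOR-NOT chain
    for bit in register_bits:
        outputs.append(bit and not blocked)
        blocked = blocked or bit
    return outputs

# Two neurons glued at the same position: only the first one passes.
print(hierarchy_mask([True, True, False]))  # [True, False, False]
```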
The hierarchy circuit does not completely eliminate the influence of the DB; however, if the DB is kept relatively small, the gluing effect is not visible. We have verified this by means of simulations performed in the realized software model of the NN.

6. Conclusions

A novel, current-mode, asynchronous and parallel sorting circuit has been proposed in the paper. The circuit is to be used in analog unsupervised neural networks trained in accordance with the Neural Gas algorithm. Since the number of neurons in such networks can be large, the circuit has been designed in such a way as to enable sorting even a large number of neurons. In fact, since the neighborhood radius is usually much smaller than the number of neurons, we do not need to sort all neurons. Only the positions of the first λ neurons are important, while the remaining neurons do not need to be sorted. The proposed circuit supports this feature, in contrast to typical software implementations, in which all neurons have to be sorted or at least checked λ times.
The proposed circuit is versatile. It can be used in the role of nonlinear Min, Max or Median filters. All these operations
are performed at the same time.
The circuit has been implemented in the TSMC 180 nm CMOS technology and verified by means of transistor-level simulations. While sorting eight input signals (an example case) at a frequency of 200 kHz, the circuit dissipates an average power of 300 μW. To verify the robustness of the proposed solution, we performed comprehensive tests (corner analysis) for different values of the process, voltage and temperature variations.

References

[1] T.M. Martinetz, S.G. Berkovich, K. Schulten, Neural-gas network for vector quantization and its application to time-series prediction, IEEE Trans. Neural Networks 4 (4) (1993) 558–569.
[2] D. Borkowski, A. Wetula, A. Bien, Contactless measurement of substation busbars voltages and waveforms reconstruction using electric field sensors
and artificial neural network, IEEE Trans. Smart Grid PP (99) (2015) 174–183.
[3] S. Osowski, K. Siwek, The selforganizing neural network approach to load forecasting in the power system, in: International Joint Conference on Neural
Networks, vol. 5, 1999, pp. 3401–3404.
[4] S. Osowski, T.H. Linh, ECG beat recognition using fuzzy hybrid neural network, IEEE Trans. Biomed. Eng. 48 (11) (2001) 1265–1271.
[5] T. Martinetz, K. Schulten, A ‘neural gas’ network learns topologies, Artificial Neural Networks, Elsevier, 1991, pp. 397–402.
[6] T. Martinetz, S. Berkovich, K. Schulten, ’Neural-gas’ network for vector quantization and its application to time-series prediction, IEEE Trans. Neural
Networks 4 (4) (1993) 558–569.
[7] T. Martinetz, K. Schulten, Topology representing networks, Neural Networks 7 (3) (1994) 507–522.
[8] B. Fritzke, Advances in Neural Information Processing Systems 7, third ed., MIT-Press, 1994.
[9] T. Kohonen, Self-Organizing Maps, third ed., Springer, Berlin, 2001.
[10] S. Marsland, J. Shapiro, U. Nehmzow, A self-organising network that grows when required, Neural Networks 15 (8–9) (2002) 1041–1058.
[11] Y. Prudent, A. Ennaji, An incremental growing neural gas learns topologies, in: IEEE International Joint Conference on Neural Networks, 2005, pp. 1211–1216.
[12] M. Kolasa, T. Talaśka, R. Długosz, A novel recursive algorithm used to model hardware programmable neighborhood mechanism of self-organizing
neural networks, in: Applied Mathematics and Computation, vol. 267, Elsevier, 2015, pp. 314–328.
[13] J. Madrenas, D. Fernandez, J. Cosp, A low-voltage current sorting circuit based on 4-t min-max CMOS switch, in: IEEE International Conference on
Electronics, Circuits, and Systems (ICECS), 2010, pp. 351–354.
[14] L. Gu, S. Bingxue, An expansible current-mode sorting integrated circuit for pattern recognition, in: International Joint Conference on Neural Networks,
1999, pp. 3123–3127.
[15] S. Rovetta, R. Zunino, Minimal-connectivity circuit for analogue sorting, in: IEE Proceedings - Circuits, Devices and Systems, 1999, pp. 108–110.
[16] G. Oddone, S. Rovetta, G. Uneddu, R. Zunino, Mixed analog-digital circuit for linear-time programmable sorting, in: IEEE International Symposium on Circuits and Systems, 1997, pp. 1968–1971.
[17] R. Długosz, M. Kolasa, W. Pedrycz, M. Szulc, Parallel programmable asynchronous neighborhood mechanism for Kohonen SOM implemented in CMOS technology, IEEE Trans. Neural Networks 22 (12) (2011) 2091–2104.
[18] R. Długosz, T. Talaśka, W. Pedrycz, R. Wojtyna, Realization of the Conscience Mechanism in CMOS Implementation of Winner-Takes-All Self-Organizing
Neural Networks, IEEE Trans. Neural Networks 21 (6) (2010) 961–971, doi:10.1109/TNN.2010.2046497.
[19] R. Długosz, T. Talaśka, W. Pedrycz, Current-mode analog adaptive mechanism for ultra-low-power neural networks, IEEE Trans. Circuits. Syst. II: Express
Briefs 58 (1) (2011) 31–35.
[20] R. Długosz, T. Talaśka, Low power current-mode binary-tree asynchronous min/max circuit, Microelectron. J. 41 (2010) 64–73. http://dx.doi.org/10.1016/j.mejo.2009.12.009.
[21] F.A.K.B. Khalifa, M. Bedoui, Parallel FPGA implementation of self-organizing maps, in: International Conference on Microelectronics (ICM), 2004,
pp. 709–712.
[22] W. Kurdthongmee, SOM neural network design – a new simulink library based approach targeting FPGA implementation, J. Syst. Archit. 54 (10) (2008)
983–994.
[23] A. Tisan, M. Cirstea, A novel hardware-oriented Kohonen SOM image compression algorithm and its FPGA implementation, Math. Comput. Simul.,
Elsevier 91 (2013) 134–149.
[24] O. Shin-Hong, L. Chi-Sheng, L. Bin-Da, A scalable sorting architecture based on maskable WTA/MAX circuit, in: IEEE International Symposium on
Circuits and Systems, 2002, pp. IV-209–IV-212.
[25] P. Bengough, S. Simmons, Sorting-based VLSI architectures for the m-algorithm and t-algorithm trellis decoders, IEEE Trans. Commun. 43 (2/3/4) (1995)
514–522.
[26] M. Kolasa, R. Długosz, W. Pedrycz, M. Szulc, A programmable triangular neighborhood function for a Kohonen self-organizing map implemented on
chip, Neural Networks 25 (2012) 146–160. http://dx.doi.org/10.1016/j.neunet.2011.09.002.
[27] T. Talaśka, M. Kolasa, R. Długosz, W. Pedrycz, Analog programmable distance calculation circuit for winner takes all neural network realized in the
CMOS technology, IEEE Trans. Neural Networks and Learn. Syst. 27 (3) (2016) 661–673.
[28] T. Talaśka, M. Kolasa, R. Długosz, P. Farine, An efficient initialization mechanism of neurons for winner takes all neural network implemented in the
CMOS technology, Appl. Math. Comput. Elsevier 267 (2015) 119–138.
