Author to whom correspondence should be sent: Jong-Chen Chen
Ph: +886-5-534-2601 ext. 5300 (dept.), ext. 5332 (office); +886-5-551-2762 (home)
FAX: +886-5-531-2077
email: jcchen@mis.yuntech.edu.tw

Revised: Feb. 2001
Toward an evolvable neuromolecular hardware: a hardware design for a multilevel artificial brain with digital circuits
Abstract: A biologically inspired neuromolecular architecture implemented on digital circuits is proposed in this paper. Digital machines and biological systems provide different modes of information processing. The former are designed to be effectively programmable, whereas the latter have the capacity to evolve. Previously, we developed a multilevel computer model of the brain, the artificial neuromolecular (ANM) system. The experimental results showed that this self-organizing model has a long-term evolutionary learning capability that allows it to learn in a continuous manner, and that the function of the system changes as its structure is altered. Malleability and gradual transformability play an important role in this adaptability. The implementation of this model on digital circuits would allow it to perform on a real-time basis and provide an architectural paradigm for emerging molecular or neuromolecular electronic technologies.

Keywords: evolutionary adaptability, artificial brain, multilevel evolutionary learning, evolvable hardware
1. Introduction

Our brain is a highly active, asynchronous, concurrent network. This network has
significant information processing capability that allows us to think, imagine, dream, and so on. In contrast, conventional digital computers have excellent computational power for
performing an enormous amount of repetitive work and a variety of information processing tasks ranging over a wide spectrum of applications. Conrad [19] indicated that the major
dichotomy between brains and machines is the ability to evolve versus programmability. Evolution by variation and selection is the foundation of nature's problem-solving method [23]. In biological systems, functions and structures are closely related [15]. That is, when the structures of a system are altered, its functions (or behaviors) change accordingly. Evolvability and a close structure-function relationship provide organisms with the malleability (gradual transformability) to cope with environmental changes (i.e., noisy environments) and to learn new survival strategies for uncertain environments (i.e., new environments). In recent years, the application of evolutionary computational techniques to different problem domains has gained more attention and grown rapidly. The major contributions were made by evolutionary optimization procedures [1], the evolutionary programming approach [29], evolutionary strategies [57,59], and genetic algorithms [30,41]. Unlike biological systems, conventional computers are deficient in coping with problem change [15,20,79]. A slight modification in a computer program can easily produce an entirely different behavior. Usually, reprogramming is inevitable when even a slight change is made to the problem. However, as advocated by Turing [71], there does
exist an effective procedure (or program) that can simulate (or solve) any problem as long as it can be defined (or described) in a formal, precise manner. This means that conventional
computers have an effective programmability that allows us to simulate any physically realizable process in nature [16].
As indicated above, evolvability is an important feature of our brain that allows adaptability. Conventional computers have an effective programmability that allows us to program them to perform a wide variety of tasks. One of the ultimate goals is to integrate the merits of the information processing mechanisms provided by both brains and computers into a single system (which might be called a brain-like computer), which might generate synergistic effects that cannot be achieved by a brain or a computer alone. However, the principles of biological information processing are still far from being fully understood. While the success of a real brain-like computer may seem faraway, a feasible approach is to employ some possible information processing mechanisms understood from our brain, develop a system based on them, and perform a variety of experiments. A vast number of research projects have been conducted along this line. These include connectionist models, evolutionary neural models, evolvable hardware, molecular computing, molecular electronics, and neuromolecular systems.

Connectionist models [32,33,36,45,49,73,74], which attempt to use the strength of connections among neurons to represent information, are the most well-known neural models. A number of investigators further applied evolutionary learning techniques to connectionist models [46,58,63-65,75,77,78,81,82] and to intraneuronal models [23,24,46,47]. The advantages of evolutionary design over human design can be found in Yao and Higuchi [80]. The above models have more flexible learning capabilities than classic artificial intelligence (AI) models and are applicable to a variety of problem domains. However, most models developed so far are software simulation systems. Simulating a population of networks, in particular an ensemble of evolutionary neural networks, in software is computationally expensive. Many studies on evolvable hardware have thus emerged.
As pointed out by Yao [79], there is no unanimous definition of evolvable hardware at this moment. He defined it as architectures, structures, and functions that can change dynamically and autonomously to perform specific tasks, but with a constant hardware architecture [79]. Simulated evolution and reconfigurable hardware are two major aspects of evolvable hardware. de Garis [25-27] further divided evolvable hardware into two categories, extrinsic and intrinsic: the former simulates evolution using software, while the latter carries out the evolutionary process directly in hardware.
Sipper et al. [61] proposed two reconfigurable architectures inspired by evolution and ontogeny. Higuchi and his colleagues [39,40] have been working on the development of
evolvable hardware chips for different applications: an analog chip for cellular phones, a clock-timing chip for gigahertz systems, a chip for autonomous reconfiguration control, a data compression chip, and a chip for controlling robotic hands and navigation. Murakawa
et al. [55] presented an evolvable hardware for neural network applications by reconfiguring the network topology and node functions in order to adapt the dynamics for a specific problem domain. de Garis [25-27] developed an artificial brain that can assemble a great
number of cellular automata-based neural net modules and in the future may control the behavior of a kitten robot.

It should be noted that connectionist models, including most evolutionary neural networks and evolvable hardware, emphasize the connections among neurons based on the Hebbian rule and omit information processing inside the neurons. Roughly, they consider the neurons to be simple on/off threshold units with a simple firing rule. The intelligence of these models is mediated primarily by exchanging signals among neurons. In general, these models have a common underlying structure (i.e., they map to one another). When learning is completed, input patterns are translated into the strength of the connections among the neurons. The patterns will interfere with one another because they are coded over the same set of connections. This has been called the superposition problem, and it becomes more severe as the number of patterns stored in a network increases.
The neuromolecular models that will be described in this study shift the emphasis to an intraneuronal form of information processing. In the early 1970s, Conrad proposed some molecular information processing architectures motivated by some modes of information processing in the human brain [10-13]. This line of work was further developed into the idea of molecular computers [16-18,20,22]. A number of researchers [2,3,42-44,68-70] have tried to develop carbon-based computing devices (so-called biocomputers) by using actual biological materials. However, the realization of biocomputers is still in a very early stage, not least because biological materials have not been considered seriously for practical device fabrication.
The artificial neuromolecular (ANM) model that we developed earlier [6,7] was motivated by two molecular architectures [11-13]. This model has three distinguishing features. The first is that the input-output behavior of the neurons is controlled by complex internal dynamics that reflect the molecular mechanisms inside real neurons. The second feature involves neurons that have hierarchical controls that make it possible to manipulate collections of neurons. Finally, the model is an open evolutionary architecture that has a
rich potential for the evolution of a variety of behaviors that could significantly expand the problem domains to which neural computing is applicable. In principle, this openness
should allow the model to address a broader class of problems than purely connectionist models do. However, this is still a virtual machine that runs on top of a serial digital
computer and is therefore subject to practical computational limitations.

Section 2 describes the neuromolecular architecture and the previous experimental results. Section 3 explains the detailed architecture of the intraneuronal dynamic model. Section 4 describes the multilevel learning scheme. Section 5 describes a hardware design of the biologically motivated architecture with digital circuits. Section 6 is our concluding remarks.
2. The ANM system

2.1 Brief description of the system

The ANM system is an artificial brain that provides a rich platform for evolutionary learning. The artificial brain is comprised of a network of neuron-like modules with internal dynamics. The dynamics reflect molecular processes believed to be operative in real neurons, in particular processes connected with second messenger signals and cytoskeleton-membrane interactions. The objective is to create a repertoire of special-purpose pattern processors through an evolutionary search algorithm and then to use memory manipulation algorithms to select combinations of processors from the repertoires that are capable of performing coherent functions.

The system, as implemented presently, consists of two layers of memory access
neurons (called reference neurons) and one layer of intraneuronal dynamic neurons (called cytoskeletal neurons) divided into a collection of functionally comparable subnets. Evolutionary learning can occur at the intraneuronal level through variation-selection in the cytoskeletal structures responsible for the integration of signals in space and time. The
memory manipulation algorithms that orchestrate the repertoire of neuronal processors also use evolutionary search procedures, and are well suited for operating in an associative mode as well.
2.2 Previous experimental results

By adjusting the input/output interfaces, the ANM system has been linked to a number of problem domains, including maze navigation, bit pattern recognition, Chinese character recognition, and chronic hepatitis B diagnosis. Previous investigations on the malleability of this system showed that its function changes in accordance with changes in the system's structure [9]. The experimental results also provided information about the fitness landscape implicit in the system's structure that facilitates evolutionary learning [4,9]. The evolution friendliness of this system increases as its structural complexity increases. This was investigated by adding more types of cytoskeletal fibers, allowing weaker interactions, and increasing redundancy [7]. The integration of intra- and interneuronal information processing also plays a vital role. These two types of information processing yield significant computational and learning synergies [6]. The integrated system effectively employs synergies among different levels
of learning [4]. With the above features, the system is able to learn continuously in complex problem domains and is effective in coping with problem changes [4,9]. Choosing significant features for differentiating data and insignificant features for tolerating noise is not an easy problem for any intelligent system. Our experimental results
showed that the system exhibits an effective self-organizing capability in striking a balance between pattern categorization and pattern generalization [5,8]. In the diagnosis of hepatitis
B patient data application, this system showed itself to be well suited for differentiating chronic hepatitis B patients from healthy individuals and for investigating what would be the significant parameters in determining if one is infected with chronic hepatitis B [5].
2.3 The ANM architecture

The artificial brain is comprised of two complementary neuromolecular models: reference neurons and cytoskeletal neurons. The following only explains the connections among these neurons; intraneuronal information processing will
be discussed in section 3. The neuromolecular architecture has 256 cytoskeletal neurons, divided into eight comparable subnets. Each subnet consists of 32 cytoskeletal neurons. By comparable
subnets, we mean that the input/output neuronal connections and intraneuronal structures of each subnet are similar or the same (the detail will be described in the next section). As
shown in Fig. 1, these 256 cytoskeletal neurons are controlled by two layers of reference neurons (8 high-level reference neurons and 32 low-level reference neurons). Each
high-level reference neuron controls a collection of low-level reference neurons, which in turn controls a bundle of comparable cytoskeletal neurons. A high-level reference neuron will
therefore control a particular combination of cytoskeletal neurons through low-level reference neurons.
Fig. 1. Connections between reference and cytoskeletal neuron layers. Low-level reference neurons select cytoskeletal neurons in each subnet that have similar cytoskeletal structures. High-level reference neurons select different combinations of the low-level reference neurons.
The reference neuron scheme [13] is a memory manipulation model. This approach correlates with some suggested hippocampal function mechanisms. These mechanisms involve synaptic facilitation, as in Hebbian models: a reference neuron facilitates the synapses of the firing neurons that it contacts at the same time. With these mechanisms it is possible to build up time-ordered memories, content-addressable memories, associative memories, control of circuit selection, neuron orchestration, and complex association structures. Circuit selection and orchestration are the two memory functions that are used in our model (to be explained below).

Reference neurons can be used to control network selection. Signals emanating from
reference neurons inhibit and excite a set of networks in a manner that allows only one to be active at any instant in time. This feature is important when we need to evaluate the
performance of each comparable subnet individually and alternately.

Orchestration is an adaptive process mediated by varying the neurons in an assembly. The objective of orchestration is to select good performing combinations of neurons. In the ANM system, orchestration occurs between high-level and low-level reference neurons. We note that only cytoskeletal neurons selected by reference neurons are allowed to perform pattern transduction.
2.4 Input-output interface

This system had 64 receptor neurons and 32 effector neurons when first constructed [6]. The neuronal connection patterns of each comparable subnet are the same (Fig. 2). This ensures that comparable cytoskeletal neurons in each subnet (i.e., neurons having similar intraneuronal structures) will receive the same inputs from receptor neurons and that the system's outputs are the same when the firing patterns of each subnet are the same. Each effector neuron is controlled by eight comparable cytoskeletal neurons (i.e., one from each comparable subnet). We note that an effector neuron fires when one of its controlling cytoskeletal neurons fires. The first firing effector neuron group is defined as the output associated with an input pattern. When the initial effector neuron-firing group is the same as the group determined by a particular problem domain, the system makes a correct response. The greater the number of correct responses made by the system, the higher its fitness. The operational flow of the system is shown in Fig. 3.
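The response and fitness rules just described can be sketched as follows. This is an illustrative reading, with hypothetical data structures: an effector fires as soon as any of its controlling cytoskeletal neurons fires, and fitness counts correct responses over a pattern set.

```python
# Effector firing and fitness evaluation, as described in the text.
def effector_fire_time(controller_fire_times):
    """Earliest firing time among an effector's controlling cytoskeletal
    neurons (None if none of them fires)."""
    times = [t for t in controller_fire_times if t is not None]
    return min(times) if times else None

def fitness(responses, targets):
    """Number of input patterns for which the firing effector group
    matches the group required by the problem domain."""
    return sum(1 for r, t in zip(responses, targets) if r == t)
```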
Fig. 2 Input/output interface of comparable cytoskeletal subnets. The connections between receptor neuron and cytoskeletal neuron layers are randomly decided initially, but vary as learning proceeds. The connections between cytoskeletal neuron and effector neuron layers are fixed.
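The wiring scheme in the caption can be sketched as follows: the receptor-to-cytoskeletal connections start out random (and may vary with learning), while the cytoskeletal-to-effector wiring is fixed, with comparable neuron E_k driving effector O_k. The connection density and seed here are arbitrary illustrative choices.

```python
import random

# Interface sizes from the paper: 64 receptor neurons, 32 cytoskeletal
# neurons per subnet, 32 effector neurons.
NUM_RECEPTORS, NEURONS_PER_SUBNET = 64, 32

def random_receptor_connections(density=0.1, seed=0):
    """Initial random set of receptor -> cytoskeletal connections."""
    rng = random.Random(seed)
    return {(i, e)
            for i in range(NUM_RECEPTORS)
            for e in range(NEURONS_PER_SUBNET)
            if rng.random() < density}

def effector_for(cyto_index):
    """Fixed wiring: comparable neuron E_k controls effector O_k."""
    return cyto_index
```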
Fig. 3. Operational flow of the ANM system: each input pattern is applied to the receptor neurons (I1 to I64), processed by the cytoskeletal neurons, and the first firing group of effector neurons (O1 to O32) is taken as the output.
3. Intraneuronal dynamics

3.1 Biological evidence

Experimental studies utilizing a variety of techniques suggest that chemical and molecular processes within neurons play a significant role in controlling neural firing [28,37,50-53]. Rapid depolarizing effects induced by the microinjection of second
messenger molecules (cAMP) led to the suggestion that cytoskeletal motions influence ion channels [52,53]. Presumably cAMP acts on microtubule associated proteins to trigger signal flow in the cytoskeleton or to alter the flow of signals arising from other sources. This conclusion is supported by ultrafast electron microscopic studies that correlate ion channel activity with cytoskeletal dynamics [54]. The cytoskeleton has three major components: microtubules, microfilaments (e.g., actin filaments), and intermediate filaments (referred to as neurofilaments in neurons). Microtubules and microfilaments, composed of simple tubulin polymers (alpha and beta) and actins respectively, might interact with one another via microtubule associated proteins
[34,35,56,60]. Neurofilaments might interact with microtubules and microfilaments via some of their binding proteins [62,66,67,72]. However, the real interaction among the three major filaments of the cytoskeleton is not at present well understood. The cytoskeleton extends throughout the cell and underlies the membrane. It is capable of exhibiting structural changes associated with polymerization-depolymerization processes [54]. Conformational switching [38], propagating conformational changes [21], vibratory motions of the sound wave type [53], electric-dipole oscillations of the Fröhlich type [31,37], and membrane mediated interactions [48] have also been suggested as possibilities. These and other mechanisms could conceivably coexist, allowing for different modes of signal transmission. Obviously the cytoskeleton is an extremely complex system.
3.2 The cytoskeletal neuron model

The cytoskeletal neuron is motivated by the biological evidence described above. It is simulated with a two-dimensional grid. Signals impinging on a neuron are transduced into cytoskeletal signal flows. When a compartment of a cytoskeletal neuron receives an external signal, a cytoskeletal signal will be generated in a component of the cytoskeleton and transmitted to its neighboring compartments at a specific rate. In the meantime, the signal will decrease over time. When a cytoskeletal component is activated and there are some kinases sitting in the same compartment, the cytoskeletal neuron will fire. A kinase thus serves as a readout enzyme that can recognize a subset of input patterns. Adding or deleting a kinase will as a consequence add or delete the set of patterns to which a neuron responds. All input patterns in space and time that trigger a neuron to fire are thus recognized by that neuron. Relocating a kinase to a neighboring compartment could in
some cases hold the set of patterns recognized by a neuron constant, but in general it would alter the input-output behavior of the neuron by advancing or delaying its firing time. The power of a cytoskeletal neuron is that it is capable of transducing a set of spatiotemporal input patterns into temporal output patterns.

The following explains how the signal integration features of the cytoskeleton are captured. As indicated above, a cytoskeletal signal flow is initiated when an external signal activates a readin enzyme. For example, in Fig. 4, the activation of the readin enzyme at location (2,2) will trigger a cytoskeletal signal flow transmitted along the second column of the C2 components, starting from location (2,2) and running to location (8,2). An activated component will affect the state of the various types of neighboring components if there is a MAP (microtubule associated protein) linking these components together. For example, in Fig. 4, the activation of the readin enzyme at location (3,7) will trigger a cytoskeletal signal flow transmitted along the seventh column of the C1 components, starting from location (3,7) and running to location (6,7). When the signal arrives at location (4,7), it will activate the component at location (4,8) via the MAP. The activation of this component will in turn trigger a signal flow travelling along the eighth column. We assumed that the interactions between two neighboring components are asymmetrical. That is, the activated component at location (4,8) is not sufficient to activate the component at location (4,7). The other assumption was that different types of components transmit signals at different rates. For example, C1 components transmit signals at the slowest rate.
Fig. 4. A cytoskeletal neuron. Each grid location, referred to as a site, has at most one of three types of components: C1, C2, or C3. Some sites may not have any component at all. Readin enzymes could reside at the same site as any one of the above components. Readout enzymes are only allowed to reside at the site of a C1 component. Each site has eight neighboring sites. The neighbors of an edge site are determined in a wrap-around fashion. Two neighboring components of different types may be linked by a MAP.
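The column signal flows and one-way MAP hand-offs walked through in the text can be sketched as follows. This is an illustration only; transmission rates, signal decay, and component states are omitted, and the tracing function is our own simplification.

```python
# Trace a cytoskeletal signal flow down a column, branching one-way
# through MAP links. The asymmetry assumption means a MAP hand-off is
# never followed in the reverse direction.
def signal_path(start, end_row, maps=()):
    """Sites visited by a flow from `start` (row, col) down to `end_row`.
    `maps` holds one-way links ((row, col) -> (row, col2))."""
    row, col = start
    visited = []
    for r in range(row, end_row + 1):
        visited.append((r, col))
        for src, dst in maps:
            if src == (r, col):
                visited.extend(signal_path(dst, end_row, maps))
    return visited

# The example from the text: a flow starts at (3,7) and runs down column
# 7; a MAP at (4,7) hands the signal to (4,8), starting a flow down
# column 8.
path = signal_path((3, 7), 6, maps=[((4, 7), (4, 8))])
```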
When a requisite spatiotemporal combination of cytoskeletal signals arrives at a readout enzyme site, the neuron will fire. For example, in Fig. 4, there are three possible signal flows that might reach and activate the readout enzyme at location (8,3). The first signal flow is the one transmitted along the second column, activated either by the readin enzyme at location (2,2) or by the enzyme at location (3,2). The second signal flow transmits along the third column. The third signal flow transmits along the fourth column, activated either by the readin enzyme at location (1,4) or by the enzyme at location (4,4). When two out of the three signal flows reach location (8,3) within a short period of time, they will activate the readout enzyme sitting at the same location. The activation of the latter will in turn cause the neuron to fire. However, the neuron might fire at different times for two reasons: signal flows transmit at different rates along different types of components, and they may be initiated by different readin enzymes.

We have explained how to capture the signal integration feature of the cytoskeleton. The following explains how cytoskeletal dynamics are implemented with cellular automata. Each cytoskeletal component has six possible states: quiescent (q0), active with increasing levels of activity (q1, q2, q3, and q4), and refractory (qr). A component in the highly active state (q3 or q4) will return to the refractory state at the next update time for that component type. The next state of a less active component (q0, q1, or q2) depends on the sum of all stimuli received from its active neighboring components (with each component type having its own update time). The detailed state transition rules are illustrated in Fig. 5. A component in the refractory state will go into the quiescent state at its next update time. A component in the refractory state is not affected by its neighboring components until its refractory period is over.
Fig. 5. Transition rules of the components. S1, S2, and S3 indicate a signal from a highly activated component C1, C2, and C3, respectively. For example, if C1 in the state q0 receives an S2 signal it will enter the moderately activated state q2. If it then receives an S3 signal it will enter the more activated state q3.
4. Multilevel learning

Six levels of evolutionary learning are allowed in this system. They are the initiating-signal-flow level (controlled by readin enzymes), the responding-to-signal-flow level (controlled by readout enzymes), the controlling-signal-flow level (controlled by MAPs), the transmission-signal-flow level (controlled by cytoskeletal components), the responding-to-external-stimuli level (determined by the pattern of connections to receptor neurons), and the orchestration level (controlled by reference neurons). The first four levels are intraneuronal and occur inside cytoskeletal neurons, whereas the last two levels are interneuronal.

Intraneuronal evolutionary learning has three major steps (Fig. 6). The performance of each subnet is evaluated first. Secondly, the readout enzyme, readin enzyme, MAP, or component patterns are copied from the best-performing subnets to the lesser-performing subnets, depending on which level of evolution is occurring. Finally, slight variations are introduced into the copied patterns. Evolutionary learning at the level of responding to external stimuli comprises three steps, too (Fig. 6). As above, the performance of each subnet is evaluated first. Then the patterns of connections between the receptor neuron and cytoskeletal neuron layers are copied (with variation) from the best-performing subnets to the lesser-performing subnets. Evolutionary learning at the reference neuron level also comprises three steps (Fig. 7). First, the cytoskeletal neurons controlled by each high-level reference neuron (through low-level reference neurons) are activated in sequence for evaluating their performance. Secondly, the patterns of neural activities controlled by the best-performing reference neurons are copied to the lesser-performing reference neurons. Finally, the lesser-performing reference neurons control slight variations of the neural groups controlled by the best-performing reference neurons, assuming that some errors occur during the copy process.
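The evaluate/copy/vary cycle can be sketched as follows. The "pattern" here is an abstract per-subnet genome standing in for a readin, readout, MAP, component, or connection pattern; the evaluation and mutation functions are hypothetical toy stand-ins.

```python
import random

def learning_cycle(subnets, evaluate, mutate, rng):
    """Evaluate each subnet, then copy the best subnet's pattern, with
    variation, over the lesser-performing subnets."""
    scores = {name: evaluate(p) for name, p in subnets.items()}
    best = max(scores, key=scores.get)
    for name in subnets:
        if name != best:
            subnets[name] = mutate(subnets[best], rng)
    return best

def flip_one(pattern, rng):
    """Toy variation operator: flip one bit of the copied pattern."""
    out = list(pattern)
    out[rng.randrange(len(out))] ^= 1
    return out

# Toy usage: patterns are bit lists, fitness is the number of 1s.
rng = random.Random(1)
subnets = {"s1": [0, 0, 1], "s2": [1, 1, 1], "s3": [0, 1, 0]}
best = learning_cycle(subnets, evaluate=sum, mutate=flip_one, rng=rng)
```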
In the current implementation, only one level is opened for learning at a time while the other levels are turned off. Each level is opened for 16 learning cycles. Our approach is to turn on each level in an alternating manner until the simulation is terminated. The level opening sequence is shown in Fig. 8. The alternating scheme described above does not mean that the fitness assigned to the reference neurons is independent of the properties of the cytoskeletal neurons. Evolutionary learning at the cytoskeletal neuron level alters the performance characteristics of the collection of neurons (or combination of bundles) that the reference neurons control. This alters the fitness of the collection and therefore the fitness of the reference neuron that provides access to this collection. Also, it should be noted that the mechanism controlling the evolutionary schedule was chosen in an ad hoc manner. Indeed, it would be interesting to investigate the impacts
of varying the number of learning cycles assigned to each level and the level opening sequence on learning in the future.

Previous experimental results [4,7] showed that the information processing capability of this system increases as more levels of learning are allowed. We further examined which levels of evolution contribute most to the learning. The results showed that contributions are made by several levels of evolution in the early stage of learning, and that fitness increases only at certain levels in the later stage of learning. This suggested that synergy only occurs in a selective manner. However, it is difficult to assess the kind of contribution made by each individual level of learning for the following two reasons. First, the significance of each level varies as input data (or problem domains) change. Secondly, synergies among different levels of learning suggest that learning at one level opens up opportunities for another. It is thus that we are not able to assign appropriate credit to each individual level.
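The alternating schedule, with each level open for 16 learning cycles in turn, can be sketched as follows. The level names and their order here are taken from the six-level list in this section; the actual opening order used in the experiments (Fig. 8) may differ.

```python
# Alternating level-opening schedule: one level open at a time,
# 16 learning cycles per level, repeating until the run ends.
LEVELS = ["readin", "readout", "MAP", "component",
          "receptor-connection", "reference-neuron"]
CYCLES_PER_LEVEL = 16

def open_level(cycle):
    """The single level open for learning at a given global cycle."""
    return LEVELS[(cycle // CYCLES_PER_LEVEL) % len(LEVELS)]
```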
Fig. 6. The three steps of evolutionary learning at the cytoskeletal neuron level: (a) evaluate each subnet, (b) copy the readin enzyme, readout enzyme, MAP, component, or receptor-connection patterns from the best-performing subnet to the lesser-performing subnets, and (c) vary the copied patterns.

Fig. 7. The three steps of evolutionary learning at the reference neuron level: evaluate the neuron combinations controlled by each high-level reference neuron, copy the activity patterns of the best-performing reference neurons to the lesser-performing ones, and vary the copied groups.

Fig. 8. The level opening sequence: each level is opened for 16 learning cycles in turn.
5. Digital hardware

In this section, we will explain a hardware design of the central architecture of the ANM system (i.e., cytoskeletal neurons and reference neurons) on digital circuits.
5.1 Cytoskeletal neurons

As shown in Fig. 4, the cytoskeleton is represented with a 2-D (8x8) grid structure. Cytoskeletal dynamics were simulated with 2-D cellular automata [76]. Each grid location is simulated by a clocked sequential circuit (referred to as a processing unit, PU). In total, there are sixty-four synchronous PUs for each cytoskeletal neuron. Each PU has 8 neighboring PUs. For any two neighboring PUs, there are two possible unidirectional connections between them (i.e., one direction and its opposite). This allows each PU to take signals from and send outputs to its eight neighboring PUs.

Each PU consists of three departments: input, process, and output (Fig. 9). The input department receives information from its neighboring PUs and sends its outputs to the process department. The latter integrates signals from either its input department or receptor neurons into an output signal for the output department, which in turn sends its outputs to all neighboring PUs.

As indicated earlier, the cytoskeleton model includes the following components: microtubules, neurofilaments, microfilaments, microtubule associated proteins (MAPs), readin enzymes, and readout enzymes. The following explains how to implement each of these components on digital circuits. It should be noted that our aim in this study was to provide the basic layout for implementing the ANM with conventional digital circuits. An optimized circuit design layout has not been completed yet.
Fig. 9. Block diagram of a processing unit: the input department receives signals from the eight neighboring PUs, the process department (containing a DPG, an accumulator, and a bounder) integrates them, and the output department broadcasts the result to the eight neighboring PUs.
5.1.1 Input department

As indicated earlier, the input department plays the role of converting signals from neighboring PUs into signals for the process department. It has two major functions. The first is to determine the type of influence a neighboring signal has on the current PU. The second is to control the signal conversion timing.

As noted above, each PU has 8 neighboring PUs. Eight D-latches are used to hold the information coming from the neighboring PUs (one latch for each PU). The information held in each D-latch is decoded by a corresponding 2x4 decoder to determine the type of influence a neighboring signal has on the current PU.

As indicated in section 3, biological evidence suggests that the cytoskeleton is comprised of three types of fibers: microtubules, microfilaments, and neurofilaments. Our assumption
[6] was that the cytoskeletal fibers play the role of signal transmission and integration, which in turn controls the firing activity of a neuron. In addition, we assumed that signals
transmitted along microtubules (denoted by C1 in Fig. 4) represent major signal flows in the cytoskeletal neuron and have the greatest impact on the other two types of components. In
contrast, signals transmitted along microfilaments (denoted by C3 in Fig. 4) play the role of
modulating major signal flows in the cytoskeletal neuron and have the least impact on the other two types of components. Neurofilaments (denoted by C2 in Fig. 4) also serve the role of modulating major signals, but with more impact on the other two types of components than microfilaments. In summary, the types of influence for signals from neighboring fibers are divided into three categories: strong, intermediate, and weak (denoted by S, I, and W in Fig. 10, respectively), as shown in Table 1.
Fig. 10. Input-department circuitry: select lines S0 to S2 drive a 3x8 decoder over the eight neighboring inputs (I1 to I8); D flip-flops (M1, ...) store the connection control bits.
Table 1. Influence type of a neighboring signal on a PU.

type of              type of current PU
neighboring PU       C1            C2            C3
C1                   strong        strong        strong
C2                   intermediate  strong        strong
C3                   weak          intermediate  strong
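Table 1 amounts to a small lookup from the (neighboring, current) component-type pair to an influence category. The following Python sketch is our illustration only; the names `INFLUENCE` and `influence` are assumptions, not part of the design:

```python
# Table 1 as a lookup (our sketch, not the actual decoder circuit).
# Key: (type of neighboring PU, type of current PU) -> influence category.
INFLUENCE = {
    ("C1", "C1"): "strong", ("C1", "C2"): "strong", ("C1", "C3"): "strong",
    ("C2", "C1"): "intermediate", ("C2", "C2"): "strong", ("C2", "C3"): "strong",
    ("C3", "C1"): "weak", ("C3", "C2"): "intermediate", ("C3", "C3"): "strong",
}

def influence(neighbor_type: str, current_type: str) -> str:
    """Influence type of a neighboring signal on the current PU."""
    return INFLUENCE[(neighbor_type, current_type)]
```

Note that a C1 (microtubule) neighbor always exerts a strong influence, while a C3 (microfilament) neighbor exerts the weakest influence, consistent with the assumptions stated above.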
For each connection, two bits are used to specify the influence of a neighboring signal on a processing unit. For eight neighboring connections, sixteen bits are required (denoted by M1-M16). As indicated earlier, we allow evolutionary learning to occur at the component level.
Indirectly, this would change the signal influence type from and to the neighboring PUs. For example, assume that there is a PU whose component type is C1. As shown in Table 1, it has the greatest impact on its neighboring PUs. However, its impact becomes much smaller if its component type is altered from C1 to C3. This belongs to the first level of learning in this system.
For any two neighboring PUs, the connection is defaulted if they belong to the same component type. This allows signals to transmit along the same component type. If they belong to different types, the connection is set only when there is a MAP linking them together. For every possible connection to a neighboring PU, one bit is needed to indicate whether such a MAP is present. Eight bits are required to set up the MAP pattern (denoted by M17-M24). The MAP pattern linking different types of PUs is allowed to change as evolutionary learning proceeds. This belongs to the second level of learning.
The other function of the input department is to control the timing of signal conversion, turning each neighboring signal arriving at the input department into signals for the process department. The input department polls the D-latches in sequence such that only one is allowed to perform signal conversion at a time. A counter counting from 0 to 7 is used to control the timing of signal conversion; conversion is thus done in a sequential manner. Converting the signals in parallel instead would speed up the response time, but would require a more complicated circuit design.
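The sequential polling can be sketched behaviorally (this is our illustration, not the actual circuit; the function name `poll_latches` is an assumption):

```python
# Behavioral sketch of the input department's sequential conversion: a 0-to-7
# counter selects one D-latch per tick, so only one neighboring signal is
# converted at a time.

def poll_latches(latches):
    """Yield (neighbor index, latched value) in counter order 0..7."""
    assert len(latches) == 8          # one latch per neighboring PU
    for counter in range(8):          # the 0-to-7 timing counter
        yield counter, latches[counter]
```

A parallel design would replace this loop with eight converters operating at once, trading circuit area for response time.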
5.1.2 Process department
The process department has three components: a DPG (digital pulse generator), an accumulator, and a bounder (Fig. 11). The DPG is responsible for converting signals from either receptor neurons or neighboring PUs (through the input department) into a sequence of binary signals for the accumulator. The latter adds up these binary signals by using a 3-bit counter, which increases by 1 each time it receives a pulse from the DPG. The bounder then decides, from the accumulated count, whether the PU is ready to act (see below).
Fig. 11. Process department: the DPG (with memory bit M89 controlling the readin enzyme), the accumulator, and the bounder, feeding the output department.
As indicated above, the DPG receives signals from either receptor neurons or its neighboring PUs. The pattern of connections between receptor neurons and each PU might change in the course of learning. In the current implementation, sixty-four bits (denoted by M25-M88) are employed to represent the connections between receptor neurons and each PU (one bit for each receptor neuron). This belongs to the third level of learning.
As mentioned earlier, a cytoskeletal signal is initiated when a readin enzyme receives an external signal from any one of these 64 receptor neurons. In other words, there will be no cytoskeletal signal at a site without a readin enzyme. As a consequence, the existence of a readin enzyme will directly determine whether external signals arriving from receptor neurons are allowed to be converted into cytoskeletal signals. Changing the pattern of readin enzymes will thus control the pattern of inputs into the cytoskeletal neurons. This belongs to the fourth level of learning.
As shown in section 3.2, each cytoskeletal component has six possible states: quiescent (q0), active with increasing levels of activity (q1, q2, q3, and q4), and refractory (qr). In the current version of this model, a 3-bit binary counter is employed to represent the state of a cytoskeletal component. The counter value 0 represents state q0, 1 represents q1, 2 represents q2, 3 represents q3, 4 represents q4, and 5 represents qr; the remaining two values are unused. The counter starts from 0 and increments by one when it receives a pulse from the DPG. After the count of 4, the counter will stay at the same state until its next update time. A component in state q3 or q4 will go into the refractory state (qr) at its next update time, and then go into the quiescent state (q0) at the following update time.
As shown in Table 1, there are three types of signals that the DPG might receive. In the
current implementation, we assume that the DPG will generate one, two, and three pulses for the accumulator when it receives a weak, intermediate, and strong signal, respectively. The
DPG has three 6-bit parallel-load/serial-out registers that load data into the registers in parallel and then send these bits out one at a time. For example, in Fig. 12, the first 6-bit register will load the data 000111 in parallel (the three 1's representing the three pulses to be generated), and then send these bits out one at a time.
As mentioned earlier, the accumulator will send its outputs to the bounder. The latter is
used for determining whether a PU is ready for sending outputs to its neighboring PUs or firing a neuron. As shown in Fig. 12, the bounder has two inputs: P and Q. Input P takes
signals from the accumulator while Q is a fixed threshold set up by the system in advance. Currently, the threshold value is set to 011, representing the highly activated state q3. There are three possible cases between P and Q.
When P is greater than or equal to Q, this means that the PU is ready to send outputs to its neighboring PUs or to fire a neuron. Specifically, the neuron will fire when P is greater than Q and there is a readout enzyme sitting at the same site. As indicated earlier, only the first firing effector neuron is recorded as an output associated with each input pattern. As a consequence, all PUs will be reset to their initial states when there is a cytoskeletal neuron firing. Through changing the pattern of readout enzymes, we can control the output pattern of the cytoskeletal neurons. Like readin enzymes, the pattern of readout enzymes is allowed to change as learning proceeds. This belongs to the fifth level of learning.
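The bounder's decision rule can be summarized in a few lines; the sketch below is ours (the function name `bounder` and the return convention are assumptions), with Q fixed at 011 as stated above:

```python
# Behavioral sketch of the bounder: compare the accumulator count P with the
# fixed threshold Q. Q = 0b011 represents the highly activated state q3.

Q = 0b011  # system-wide threshold (state q3)

def bounder(p: int, has_readout_enzyme: bool, q: int = Q):
    """Return (send_outputs, fire_neuron) for accumulator count p."""
    send_outputs = p >= q                        # ready to signal neighboring PUs
    fire_neuron = p > q and has_readout_enzyme   # firing also needs a readout enzyme
    return send_outputs, fire_neuron
```

When P is below Q, neither action is taken; when P equals Q, the PU signals its neighbors; only P above Q together with a readout enzyme fires the neuron.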
In addition to the above three major components, the process department has a controller with two functions (see Fig. 12). First, it will control the accumulator countdown at discrete instants of time. An accumulator in the activated state q2 will go to the slightly activated state q1 at its next update time if it receives no signal. Similarly, an accumulator in the slightly activated state q1 will go to the quiescent state q0 if it receives no signal. An accumulator in the refractory state is not affected by its neighbors until its refractory period is over, and will go into the quiescent state at its next update time. The refractory state is necessary to ensure unidirectional propagation.
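The state transitions described above can be collected into one update function. The sketch below is our behavioral model under an assumption we make explicit: pulse counting and the per-step update are folded into a single call, whereas the hardware counts pulses between update instants.

```python
# Component state update sketch: states q0..q4 plus refractory qr, encoded 0-5
# to match the 3-bit counter described above.

Q0, Q1, Q2, Q3, Q4, QR = range(6)

def next_state(state: int, pulses: int) -> int:
    """One update step: count pulses (saturating at q4), decay without input,
    and send q3/q4 through the refractory state qr back to q0."""
    if state == QR:                  # refractory: ignore neighbors, then quiesce
        return Q0
    if state in (Q3, Q4):            # highly activated states become refractory
        return QR
    if pulses > 0:                   # count incoming DPG pulses, saturating at q4
        return min(state + pulses, Q4)
    return max(state - 1, Q0)        # decay one level per step with no input
```

The unidirectional propagation noted above falls out of the qr branch: a just-active component cannot be re-excited by the signal it has just passed on.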
Secondly, the process department controls the update timing of the accumulator state. Indirectly, this controls the signal transfer timing from the accumulator to the bounder, which in turn determines the PU transmission speed. As indicated earlier, different types of components transmit signals at different speeds. Our somewhat arbitrary choice is that C3 transmits signals to its neighboring components on the fastest time scale, C2 transmits slightly slower than twice the C3 rate, and C1 transmits slightly slower still.
Fig. 12. The process department in detail: the DPG loads one of three 6-bit parallel-load/serial-out registers (000111, 000110, or 000100 for strong (S), intermediate (I), or weak (W) signals from the input department) and shifts the bits out to the 3-bit accumulator; the bounder compares the accumulated value P with the threshold Q (P>Q, together with the readout enzyme bit M90, triggers neuron firing).
5.1.3 Output department
As indicated earlier, there are two unidirectional connections between a PU and its neighboring PUs. In section 5.1.1, we explained that M17-M24 (representing the pattern of MAPs) controls the pattern of signals from neighboring PUs to a specific PU. Similarly, we need one bit per neighbor to indicate whether a PU should send outputs to its neighboring PUs. In total, eight bits are required (denoted by M91-M98), as shown in Fig. 13. As before, the connection is defaulted for any two neighboring PUs of the same type. If they belong to different types, the connection is set only if there is a MAP linking them together. As described for the input department, the MAP pattern is allowed to change as learning proceeds.
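The outgoing-connection rule mirrors the incoming one and reduces to a single predicate per neighbor. The following one-liner is our sketch (the function name is an assumption):

```python
# Output-connection rule sketch: a connection to a neighbor exists by default
# when both PUs share a component type; otherwise it requires the
# corresponding MAP bit (one of M91-M98) to be set.

def sends_output(same_type: bool, map_bit: int) -> bool:
    """True if the PU sends outputs to this neighboring PU."""
    return same_type or bool(map_bit)
```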
Fig. 13. Output department: memory bits M91-M98 gate the outputs to the eight neighboring PUs.
5.1.4 Preliminary result
To evaluate the performance of the above digital circuits, each PU was simulated and tested with the MAX+PLUS II system, a digital circuit simulation tool developed by Altera Corporation (San Jose, CA). The results showed that these circuits function as expected. The simulation results were consistent with those of the ANM system constructed previously. At this stage, we have not yet performed a complete set of experiments to report in the present paper.
5.2 Reference Neurons
As shown in Fig. 1, cytoskeletal neurons are controlled by two levels of reference neurons. A low-level reference neuron contacts all cytoskeletal neurons in a given class. A high-level reference neuron contacts subsets of low-level reference neurons. The connections among the two levels of reference neurons are allowed to change when learning proceeds. This belongs to the sixth level of learning.
5.3 Learning Mechanisms
We have shown in section 5.1 that each PU is controlled by 98 bits of memory (M1-M16 for determining the type of influence for signals from neighboring PUs, M17-M24 and M91-M98 for setting up the MAP patterns, M25-M88 for choosing stimuli from receptor neurons, M89 and M90 for deciding the existence of readin and readout enzymes, respectively). For each cytoskeletal neuron implemented with 8x8 cellular automata, around 6.4 kilobits of memory are required. As mentioned earlier, the ANM system has 256 cytoskeletal neurons in the current implementation. In total, this would require slightly more than 1.6 megabits of memory. When learning proceeds at the level of cytoskeletal neurons, only the memory bits corresponding to the operative level of learning are varied. Then, the variations from the bit positions representing the best-performing subnets are copied to the lesser-performing subnets. As shown in Fig. 14, the above process is repeated until the system is terminated.
When learning proceeds at the level of reference neurons, only the connections among the two levels of reference neurons are allowed to change in the course of learning. That is,
each high-level reference neuron is allowed to change its selection of 32 low-level reference neurons. For each high-level reference neuron, 32 bits are needed to specify the pattern of connections to the 32 low-level reference neurons. In total, 256 bits (m1-m256) are needed for the eight high-level reference neurons. The performance (or fitness) of each high-level reference neuron is determined by the cytoskeletal neurons it selects. The variations from those bits representing the best-performing reference neurons are copied to the lesser-performing reference neurons. As shown in Fig. 15, the above process is repeated until the system is terminated.
Generate the initial repertoire of cytoskeletal neurons Mijk
  (i: subnet number; j: neuron number; k: memory bit number)
Repeat
  Evaluate the performance of each subnet
    (For each input pattern, a subnet makes a correct response when the first
    effector neuron-firing group is the same as the group determined by a
    specific problem domain. The greater the number of correct responses made
    by a subnet, the higher its fitness. The detailed procedure of evaluating
    the performance of a subnet is shown in Fig. 3.)
  Select three best-performing subnets
  Copy Mxjk to Myjk (x: best-performing subnet; y: lesser-performing subnet;
    j: 1,...,32; k: 1,...,98)
  Mutate Myjk (y: lesser-performing subnet; j: 1,...,32; k depends on which
    level of learning is operative. The range of k is:
      from 1 to 16 if evolving at the component level
      from 17 to 24 and from 91 to 98 if evolving at the MAP level
      from 25 to 88 if evolving at the rec/cyto connection level
      89 if evolving at the readin enzyme level
      90 if evolving at the readout enzyme level)
Until learning objective complete or maximum learning time reached

Fig. 14. Evolutionary learning procedure at the level of cytoskeletal neurons.
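The copy-and-mutate loop of Fig. 14 can be sketched behaviorally in Python. This is our illustration only: the function name, the mutation rate, and the use of an externally supplied fitness list (standing in for the pattern-processing evaluation of Fig. 3) are assumptions. Bit indices are 0-based, so memory bit Mk maps to index k-1.

```python
# Behavioral sketch of one generation of the Fig. 14 learning loop.
import random

K_RANGES = {  # 0-based index ranges of the bits mutable at each learning level
    "component": list(range(0, 16)),                    # M1-M16
    "MAP": list(range(16, 24)) + list(range(90, 98)),   # M17-M24, M91-M98
    "rec/cyto": list(range(24, 88)),                    # M25-M88
    "readin": [88],                                     # M89
    "readout": [89],                                    # M90
}

def learning_step(subnets, fitness, level, n_best=3, mutation_rate=0.05):
    """Copy the best subnets' bits over the lesser-performing ones, then
    mutate only the bits belonging to the operative learning level."""
    ranked = sorted(range(len(subnets)), key=lambda i: fitness[i], reverse=True)
    best, rest = ranked[:n_best], ranked[n_best:]
    for y, x in zip(rest, best * ((len(rest) + n_best - 1) // n_best)):
        subnets[y] = [neuron[:] for neuron in subnets[x]]   # copy Mxjk -> Myjk
        for neuron in subnets[y]:                           # mutate Myjk
            for k in K_RANGES[level]:
                if random.random() < mutation_rate:
                    neuron[k] ^= 1                          # flip a memory bit
    return subnets
```

The key property of the multilevel scheme is visible in `K_RANGES`: at any moment only one slice of the 98-bit memory is subject to variation, so the other levels of structure are held fixed while one level evolves.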
Generate the initial repertoire of high-level reference neurons mi (i: memory bit number)
Repeat
  Evaluate the performance of each high-level reference neuron
    (The fitness of a reference neuron is determined by the performance of the
    cytoskeletal neurons that it selects.)
  Select three best-performing high-level reference neurons
  Copy mx to my (x: best-performing ref. neurons; y: lesser-performing ref. neurons)
  Mutate my (y: lesser-performing ref. neurons)
Until learning objective complete or maximum learning time reached

Fig. 15. Evolutionary learning procedure at the level of reference neurons.
6. Conclusions
Evolution is the essence of biological systems for high adaptability. Digital machines, in contrast, are designed to be effectively programmable. In this paper, we have presented a biologically inspired neuromolecular architecture that attempts to capture certain biological information processing features. Evolutionary adaptability is one of the significant features captured in this architecture. Because software simulation of this architecture is time-consuming, we demonstrated a hardware design of this architecture using conventional digital circuits. Our ultimate goal is to build actual hardware (or better, molecularware/neuromolecularware) that is natural to the biological processing mode.
Adaptability is a very broad term. It might be defined as the capacity of a system to continue to function in the face of a changing environment. Generalization can be described as the capacity to respond to families of input patterns in a natural way in accordance with some underlying structural or functional principles [8]. Previous experimental results [4,6,8] demonstrated that this system exhibits some degree of effective generalization capability in which intraneuronal dynamics plays a significant role. However, this was still a very limited approach to generalization since high-level cognitive processes were not taken into account.
In this system, a cytoskeletal neuron with a particular integrative dynamics and readout enzyme distribution will recognize some families of input patterns (i.e., it will recognize a family of input patterns that vary in space and time). The input patterns recognized by
a cytoskeletal neuron will be generalized in a more selective way than with a simple threshold neuron. Furthermore, the manner of generalization can be altered by changing its integrative dynamics. This capability is advantageous for handling problems with environmental ambiguity. Cytoskeletal neurons may be trained to recognize sets of input patterns through evolutionary learning. A neuron trained to recognize too many patterns, however, will lose its pattern processing specificity since every pattern will trigger its firing. Conversely, if a cytoskeletal neuron is trained to recognize only a single pattern, it will be overly specific and rigid. In this case it will lose its capability for recognizing input patterns that are variable in space and time. It is important to strike a balance between these two extremes.
The ability to generalize is clearly necessary for dealing with variable or noisy environments. The problem is that dynamics that allow for effective generalization of some classes of environments necessarily preclude effective generalization for other classes. We call this the interference problem. Addressing it requires an evolutionary learning algorithm, a suitable neuronal architecture, including both the internal structures and neuronal dynamics, and memory mechanisms that link neurons into coherent groups. It is essential that the
architecture allow for the evolution of a repertoire of special purpose neurons with dynamics that have different generalization properties, and a linking mechanism that allows for orchestration of this repertoire. Our model opens up such a rich evolutionary possibility. The model is clearly much more complex than conventional connectionist models. We can regard it on the one hand as a tool for examining the nature of biological processing itself, and on the other as a tool that is capable of yielding practical benefits. This paper is our first attempt to develop a neuromolecular architecture on digital circuits. The detailed (or more effective) design of this hardware is still under investigation. The implementation of this architecture on digital circuits would allow the system to perform on a real-time basis. It would indeed expand the application domains. Future work includes widening the
dynamic capabilities of the neurons, utilizing the associative memory capability in combination with evolutionary learning, porting evolved neurons with useful pattern processing capabilities to special silicon hardware, using the system as an architectural
paradigm for emerging molecular electronic technologies, and employing the system as a vehicle for obtaining a clearer understanding of the role of intraneuronal mechanisms in brain functions.
Acknowledgment This paper is dedicated to the memory of Professor Michael Conrad, a pioneer in the field of molecular computing.
References
1. H.J. Bremermann, Optimization through evolution and recombination, in: M.C. Yovits, G.T. Jacobi and G.D. Goldstein, eds., Self-Organizing Systems (Spartan Books, Washington, D.C., 1962) 93-106.
2. F.L. Carter, ed., Molecular Electronic Devices (Marcel Dekker, New York, 1982).
3. F.L. Carter, ed., Molecular Electronic Devices II (Marcel Dekker, New York, 1987).
4. J.-C. Chen, Problem solving with a perpetual evolutionary learning architecture, Applied Intelligence 8, 1 (1998) 53-71.
5. J.-C. Chen, Data differentiation and parameter analysis of a chronic hepatitis B database with an artificial neuromolecular system, BioSystems 57 (2000) 23-36.
6. J.-C. Chen and M. Conrad, Learning synergy in a multilevel neuronal architecture, BioSystems 32 (1994) 111-142.
7. J.-C. Chen and M. Conrad, A multilevel neuromolecular architecture that uses the extradimensional bypass principle to facilitate evolutionary learning, Physica D 75 (1994) 417-437.
8. J.-C. Chen and M. Conrad, Pattern categorization and generalization with a virtual neuromolecular architecture, Neural Networks 10, 1 (1997) 111-123.
9. J.-C. Chen and M. Conrad, Evolutionary learning with a neuromolecular architecture: a biologically motivated approach to computational adaptability, Soft Computing 1, 1 (1997) 19-34.
10. M. Conrad, Information processing in molecular systems, Currents in Modern Biology (now BioSystems) 5 (1972) 1-14.
11. M. Conrad, Evolutionary learning circuits, J. Theor. Biol. 46 (1974) 167-188.
12. M. Conrad, Molecular information structures in the brain, J. Neurosci. Res. 2 (1976) 233-254.
13. M. Conrad, Complementary molecular models of learning and memory, BioSystems 8 (1976) 119-138.
14. M. Conrad, Principle of superposition-free memory, J. Theor. Biol. 67 (1977) 213-219.
15. M. Conrad, Adaptability: The Significance of Variability from Molecule to Ecosystem (Plenum Press, New York, 1983).
16. M. Conrad, On design principles for a molecular computer, Commun. ACM 28 (1985) 464-480.
17. M. Conrad, The lure of molecular computing, IEEE Spectrum 23 (1986) 55-60.
18. M. Conrad, Molecular computing: a synthetic approach to brain theory, in: J. Casti and A. Karlqvist, eds., Real Brains, Artificial Minds (North Holland, New York, 1987) 197-226.
19. M. Conrad, The brain-machine disanalogy, BioSystems 22 (1989) 197-213.
20. M. Conrad, Molecular computing, in: M.C. Yovits, ed., Advances in Computers 31 (Academic Press, San Diego, 1990) 235-324.
21. M. Conrad, Electronic instabilities in biological information processing, in: P.I. Lazarev, ed., Molecular Electronics (Kluwer Academic Publishers, Amsterdam, 1991) 41-50.
22. M. Conrad, Integrated precursor architecture as a framework for molecular computer design, Microelect. J. 24 (1993) 263-285.
23. M. Conrad, R.R. Kampfner, and K.G. Kirby, Neuronal dynamics and evolutionary learning, in: M. Kochen and H. Hastings, eds., Advances in Cognitive Science: Steps Toward Convergence 104 (Westview Press, Boulder, CO, 1988) 169-189.
24. M. Conrad, R.R. Kampfner, K.G. Kirby, E.N. Rizki, G. Schleis, R. Smalz, and R. Trenary, Towards an artificial brain, BioSystems 23 (1989) 175-218.
25. H. de Garis, An artificial brain: ATR's CAM-Brain project aims to build/evolve an artificial brain with a million neural net modules inside a trillion cell cellular automata machine, New Generation Computing 12, 2 (1994).
26. H. de Garis, LSL evolvable hardware workshop report, ATR, Japan, Tech. Rep. (Oct. 1995).
27. H. de Garis, Review of proceedings of the first NASA/DoD workshop on evolvable hardware, IEEE Trans. Evol. Comput. 3, 4 (1999) 304-306.
28. G.I. Drummond, Cyclic nucleotides in the nervous system, in: P. Greengard and G.A. Robinson, eds., Advances in Cyclic Nucleotide Research (1983) 373-494.
29. L. Fogel, A. Owens, and M. Walsh, Artificial Intelligence through Simulated Evolution (Wiley, New York, 1966).
30. A.S. Fraser, Simulation of genetic systems by automatic digital computers, Australian J. of Biol. Sci. 10 (1957) 484-491.
31. H. Fröhlich, Evidence for coherent excitation in biological systems, Int. J. Quantum Chem. 23 (1983) 1589-1595.
32. K. Fukushima, S. Miyake, and T. Ito, Neocognitron: a neural network model for a mechanism of visual pattern recognition, IEEE Trans. Syst., Man, Cybern. 13 (1983) 826-834.
33. K. Fukushima, Neocognitron: a hierarchical neural network capable of visual pattern recognition, Neural Networks 1 (1988) 119-130.
34. L.M. Griffith and T.D. Pollard, Evidence for actin filament-microtubule interaction mediated by microtubule-associated proteins, J. Cell Biol. 78 (1978) 958-965.
35. L.M. Griffith and T.D. Pollard, The interaction of actin filaments with microtubules and microtubule-associated proteins, J. Biol. Chem. 257 (1982) 9143-9151.
36. S. Grossberg, How does a brain build a cognitive code, Psychological Review 87 (1980) 1-51.
37. S.R. Hameroff, Ultimate Computing (North-Holland, Amsterdam, 1987).
38. S.R. Hameroff, J.E. Dayhoff, R. Lahoz-Beltra, A. Samsonovich, and S. Rasmussen, Conformational automata in the cytoskeleton: models for molecular computation, Computer 25, 11 (1992) 30-39.
39. T. Higuchi, M. Iwata, D. Keymeulen, H. Sakanashi, M. Murakawa, I. Kajitani, E. Takahashi, K. Toda, M. Salami, N. Kajihara, and N. Otsu, Real-world applications of analog and digital evolvable hardware, IEEE Trans. Evol. Comput. 3, 3 (1999) 220-235.
40. T. Higuchi and N. Kajihara, Evolvable hardware chips for industrial applications, Commun. ACM 42, 4 (1999) 60-66.
41. J. Holland, Adaptation in Natural and Artificial Systems (University of Michigan Press, Ann Arbor, MI, 1975).
42. F.T. Hong, Intelligent materials and intelligent microstructures in photobiology, Nanobiology 1 (1992) 39-60.
43. F.T. Hong, Bacteriorhodopsin as an intelligent material: a nontechnical summary, MEBC (1992) 13-17.
44. F.T. Hong, Biomolecular computing, in: R.A. Meyers, ed., Molecular Biology and Biotechnology: A Comprehensive Desk Reference (Weinheim and Cambridge, New York, 1995) 194-197.
45. J. Hopfield, Neural networks and physical systems with emergent collective
computational abilities, Proc. Nat. Acad. Sci. 79 (1982) 2554-2558.
46. R. Kampfner and M. Conrad, Sequential behavior and stability properties of enzymatic neuron networks, Bull. Math. Biol. 45 (1983) 969-980.
47. K. Kirby and M. Conrad, Intraneuronal dynamics as a substrate for evolutionary learning, Physica D 22 (1986) 205-215.
48. F.H. Kirkpatrick, New models of cellular control: membrane cytoskeletons, membrane curvature potential, and possible interactions, BioSystems 11 (1979) 85-92.
49. T. Kohonen, A principle of neural associative memory, Neuroscience 2 (1977) 1065-1076.
50. E.A. Liberman, S.V. Minina, and K.V. Golubtsov, The study of the metabolic synapse II: comparison of cyclic 3',5'-AMP and cyclic 3',5'-GMP effects, Biophysics 22 (1975) 75-81.
51. E.A. Liberman, S.V. Minina, N.E. Shklovsky-Kordy, and M. Conrad, Microinjection of cyclic nucleotides provides evidence for a diffusional mechanism of intraneuronal control, BioSystems 15 (1982) 127-132.
52. E.A. Liberman, S.V. Minina, N.E. Shklovsky-Kordy, and M. Conrad, Change of mechanical parameters as a possible means for information processing by the neuron (in Russian), Biophysics 27 (1982) 863-870.
53. E.A. Liberman, S.V. Minina, O.L. Mjakotina, N.E. Shklovsky-Kordy, and M. Conrad, Neuron generator potentials evoked by intracellular injection of cyclic nucleotides and mechanical distension, Brain Res. 338 (1985) 33-44.
54. G. Matsumoto, S. Tsukita, and T. Arai, Organization of the axonal cytoskeleton: differentiation of the microtubule and actin filament arrays, in: F.D. Warner and J.R. McIntosh, eds., Kinesin, Dynein, Cell Movement, Microtubule Dynamics (Alan R. Liss, New York, 1989) 335-356.
55. M. Murakawa, S. Yoshizawa, I. Kajitani, X. Yao, N. Kajihara, M. Iwata, and T. Higuchi, The GRD chip: genetic reconfiguration of DSPs for neural network processing, IEEE Trans. Comput. 48, 6 (1999) 628-639.
56. T.D. Pollard, S.C. Selden, and P. Maupin, Interaction of actin filaments with microtubules, J. Cell Biol. 99 (1984) 33-37.
57. I. Rechenberg, Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution (Frommann-Holzboog, Stuttgart, Germany, 1973).
58. G.N. Reeke and G.M. Edelman, Selective networks and recognition automata, in: M. Kochen and H.M. Hastings, eds., Advances in Cognitive Science (Westview Press, Boulder, CO, 1988) 50-71.
59. H.P. Schwefel, Numerical Optimization of Computer Models (Wiley, Chichester, 1981).
60. S.C. Selden and T.D. Pollard, Phosphorylation of microtubule-associated proteins regulates their interaction with actin filaments, J. Biol. Chem. 258 (1983) 7064-7071.
61. M. Sipper, D. Mange, and E. Sanchez, Quo vadis evolvable hardware, Commun. ACM 42, 4 (1999) 50-56.
62. O. Skalli and R.D. Goldman, Recent insights into the assembly, dynamics, and functions of intermediate filament networks, Cell Motil. Cytoskel. 19 (1991) 67-79.
63. R. Smalz and M. Conrad, A credit apportionment algorithm for evolutionary learning with neural networks, in: A.V. Holden and V.J. Kryukov, eds., Neurocomputers and Attention II: Connectionism and Neurocomputers, Proceedings in Nonlinear Science (Manchester University Press, Manchester, 1991) 663-673.
64. R. Smalz and M. Conrad, Combining evolution with credit apportionment: a new learning algorithm for neural nets, Neural Networks 7 (1994) 341-351.
65. P. Spiessens and J. Torreele, Massively parallel evolution of recurrent networks: an approach to temporal processing, in: F.J. Varela and P. Bourgnine, eds., Neurocomputers
and Attention II: Connectionism and Neurocomputers (Manchester University Press, Manchester, UK, 1991) 663-673.
66. P. Stair, Cytoplasmic matrix: old and new questions, J. Cell Biol. 99 (1984) 235-238.
67. P.M. Steinert, J.C.R. Jones, and R.D. Goldman, Intermediate filaments, J. Cell Biol. 99 (1984) 22-27.
68. A. Tamulis, S. Janusonis, and S. Bazan, Selection rules for self-formation in the molecular nanotechnology, Makromol. Chem., Macromol. Symp. 46 (1991) 181-185.
69. A. Tamulis and L. Bazhan, Quantum chemical investigations of photoactive supermolecules and supramolecules, their self-assembly and design of molecular devices, Synthetic Metals (1993) 4685-4690.
70. A. Tamulis, E. Stumbrys, V. Tamulis, and J. Tamuliene, Quantum mechanical investigations of photoactive molecules, supermolecules, supramolecules and design of basic elements of molecular computers, in: F. Kajzar and V.M. Agranovich, eds., Photoactive Organic Materials (Kluwer Academic Publishers, Netherlands, 1996) 53-66.
71. A.M. Turing, Computing machinery and intelligence, Mind 59 (1950) 433-460.
72. R.B. Vallee, G.S. Bloom, and W.E. Theurkauf, Microtubule-associated proteins: subunits of the cytomatrix, J. Cell Biol. 99 (1984) 38-44.
73. P. Werbos, Beyond regression: new tools for prediction and analysis in the behavioral sciences, Ph.D. Thesis, Harvard University (1974).
74. P. Werbos, Backpropagation and neurocontrol: a review and prospectus, in: Proc. Int. Joint Conf. Neural Networks (1989) 209-216.
75. D. Whitley and T. Hanson, Optimizing neural networks using fast, more accurate genetic search, in: Proc. of the 3rd Int. Conf. Genetic Algorithms (Kaufmann, Palo Alto, CA, 1989) 157-255.
76. S. Wolfram, Cellular automata as models of complexity, Nature 311 (1984) 419-424.
77. X. Yao, A review of evolutionary artificial neural networks, Int. J. Intell. Syst. 8, 4 (1993) 539-567.
78. X. Yao, Evolutionary artificial neural networks, Int. J. Neural Systems 4, 3 (1993) 203-222.
79. X. Yao, Following the path of evolvable hardware, Commun. ACM 42, 4 (1999) 47-49.
80. X. Yao and T. Higuchi, Promises and challenges of evolvable hardware, IEEE Trans. Syst., Man, Cybern. 29, 1 (1999) 87-97.
81. X. Yao and Y. Liu, A new evolutionary system for evolving artificial neural networks, IEEE Trans. Neural Networks 8, 3 (1997) 694-713.
82. X. Yao and Y. Liu, Making use of population information in evolutionary artificial neural networks, IEEE Trans. Syst., Man, Cybern. 28, 3 (1998) 417-425.