Neural Networks
Neural networks are a class of algorithms loosely modelled on connections
between neurons in the brain [30], while convolutional neural networks (a
highly successful neural network architecture) are inspired by experiments
performed on neurons in the cat's visual cortex [31–33].
From: Progress in Medicinal Chemistry, 2018
Neural Networks
J.F. Pagel, Philip Kirshtein, in Machine Dreaming and Consciousness, 2017
Artificial Neural Networks
Neural logic computes results with real numbers, the numbers that we
routinely use in arithmetic and counting, as opposed to “crisp” binary ones and
zeros. Specialized hardware and software have been created to implement
neural probabilistic truth/not-truth, or fire/don’t fire logic. This multilevel
approach to fuzzy logic can be used to measure variables by the degree to
which they exhibit a characteristic. Fuzzy state machines act on the multilevel
inputs through a system of rule-based inference. A set of variables and inputs
acted on by multiple rules can produce multiple results with different probabilities. Since
entry into and exit from any specific fuzzy state is probabilistic, there exists the
case for autonomous creation of new states based on alternate state
transition rules to a new set of solutions. Using historical data in a more
mutable and adaptable form (sometimes called soft fuzzy logic) can allow the
system to change rules based on previous data, and to then use that changed
approach to create new states.5
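The graded truth values and rule-based inference described above can be sketched in Python. The temperature variable, thresholds, and two-rule base here are invented purely for illustration:

```python
def warm_membership(temp_c, low=15.0, high=30.0):
    """Degree (0..1) to which a temperature is 'warm': a graded
    truth value rather than a crisp 0/1, as in fuzzy logic."""
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)  # linear ramp between bounds

def fan_speed(temp_c):
    """A toy rule base acting on the graded input: each rule fires
    with a strength equal to its membership degree, and the results
    are blended rather than chosen by a hard true/false test."""
    warm = warm_membership(temp_c)
    cool = 1.0 - warm
    return warm * 100.0 + cool * 20.0  # percent fan speed

print(fan_speed(15.0))   # only the 'cool' rule applies
print(fan_speed(22.5))   # both rules partially active
```

At 22.5 degrees both rules are half-active, so the output is a weighted blend of the two rule results rather than a jump between two crisp states.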
Feedback is an intrinsic component of every physiologic system. A dynamic
feedback system makes it possible for set points to be attained in response
to changing conditions. For machine learning to be possible, feedback
systems are required. The greater the number of feedback loops, the more
precise the attainable response, particularly in fuzzy multivalent
systems.3 Feedback allows for the processing of scrambled, fragmented, and
high-interference signals, and for the processing of data packets in which
basic code data have been dropped or lost.
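The idea of a feedback system attaining a set point under changing conditions can be sketched as a simple proportional-correction loop; the gain and step counts are illustrative, and this is a toy model rather than any specific physiologic mechanism:

```python
def feedback_to_setpoint(setpoint, state=0.0, gain=0.5, steps=20):
    """Iteratively correct a state toward a set point.  Each loop
    iteration is one feedback cycle: measure the error, apply a
    proportional correction.  More cycles give a more precise result."""
    for _ in range(steps):
        error = setpoint - state   # feedback signal
        state += gain * error      # correction proportional to error
    return state

print(feedback_to_setpoint(10.0, steps=5))    # close to the set point
print(feedback_to_setpoint(10.0, steps=20))   # closer still
```

With each added feedback cycle the residual error halves, mirroring the point above that more feedback loops yield a more precise attainable response.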
Artificial neural networks (ANNs) can be used to perform probabilistic
functions in either a hardware or a software analogue. These systems are
designed to operate in the same manner in which the neurons and synapses
of the brain are theorized to operate. The architecture of neural connections
can be described as a combinational feedforward network. In order to insert
context, as well as to provide the possibility for feedback self-correction, some
networks add a form of state feedback called backpropagation. Once a
feedback system is in place, the possibility of machine learning is present. In
the process of machine learning, system behavior and processing are altered
based on the degree of approximation achieved for any specified goal. In
today’s systems, the human programmer sets the specified goals. Applied
system feedback, however, allows the AI system to develop alternative
approaches to attaining the set goals.
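The error-feedback learning described above can be sketched as a single sigmoid neuron trained by gradient descent on the AND mapping. This is a minimal illustration of feedback adjusting synaptic weights toward a programmer-set goal, not the authors' implementation; the learning rate and epoch count are arbitrary:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Goal set by the programmer: the AND truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # synaptic weights
b = 0.0         # bias
lr = 1.0        # learning rate

for _ in range(4000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target               # feedback: deviation from goal
        grad = err * out * (1.0 - out)   # chain rule through the sigmoid
        w[0] -= lr * grad * x1           # weights move to reduce error
        w[1] -= lr * grad * x2
        b -= lr * grad

print([round(sigmoid(w[0] * x1 + w[1] * x2 + b))
       for (x1, x2), _ in data])
```

After training, the rounded outputs reproduce the AND function: the feedback signal alone, not any explicit rule, shaped the weights.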
The implementation of feedback systems can be on a real-time or a pseudo-
real-time basis. Pseudo-real-time is a programmed process that provides a
timeline-independent multiplexed environment in which to implement a neural
and synaptic connection network. An artificially produced machine-style
implementation of synapses and neurons (neural network) can be set up to
operate in the present state, next state, or via synaptic weightings
implemented as matrices. Computer programming languages and systems
have been designed to facilitate the development of machine learning for
artificial neural network systems. The Python programming language modules
Theano, numpy, neurolab, and scipy provide a framework for these types of
matrix operations. The Theano library provides an environment for
performing symbolic operations on symbolic datasets to produce scalar results,
which is useful when speed is essential.
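As a rough sketch of synaptic weightings implemented as matrices, one step of such a network reduces to a matrix-vector product followed by the activation function. The layer shape and values below are illustrative, using the numpy module named above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # synaptic weights: 3 inputs -> 4 neurons
b = np.zeros(4)               # biases

x = np.array([0.5, -1.0, 2.0])   # present-state input vector
y = sigmoid(W @ x + b)           # next-state activations

print(y.shape)                    # one activation per neuron
print(bool(np.all((y > 0) & (y < 1))))  # logistic outputs lie in (0, 1)
```

Stacking further layers repeats the same matrix operation on the previous layer's output, which is why matrix-oriented modules like numpy suit these systems.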
Artificial neurons operate by summing inputs (s1, s2, s3) individually scaled by
weight factors (w1, w2, w3) and processing that sum with a nonlinear activation
function (af), most often approximating the logistic function 1/(1 + exp(−x)),
which returns a real value in the range (0,1) (Fig. 6.1). Single artificial neurons
can be envisioned as single combinatorial operators. More complex
operations, such as exclusive-or mappings, require additional levels of neurons,
just as they require additional levels of crisp logic. Processes such as look-up-
table logic can be used to implement multiple gates in one table by mapping
all possible input combinations; multiple levels of logic can be
implemented in a single table.
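A single artificial neuron as just described (a weighted sum of inputs passed through the logistic activation) can be sketched directly; the input and weight values are illustrative:

```python
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """Sum the inputs (s1, s2, s3) scaled by their weights
    (w1, w2, w3), then squash the total with the logistic
    activation 1/(1 + exp(-x)), giving a real value in (0, 1)."""
    total = sum(s * w for s, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

out = artificial_neuron([1.0, 0.5, -1.0], [0.8, 0.4, 0.3])
print(0.0 < out < 1.0)  # logistic output stays inside (0, 1)

# A zero weighted sum sits at the midpoint of the logistic curve.
print(artificial_neuron([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))  # 0.5
```

One such neuron can realize a single linearly separable gate; an exclusive-or mapping needs a second level of neurons, just as the text notes it needs a second level of crisp logic.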