Neural Networks
Neural networks are a class of algorithms loosely modelled on connections
between neurons in the brain [30], while convolutional neural networks (a
highly successful neural network architecture) are inspired by experiments
performed on neurons in the cat's visual cortex [31–33].
From: Progress in Medicinal Chemistry, 2018
Related terms:
 Hippocampus
 Neurosciences
 Perception
 Protein
 Gene
 Memory
Learn more about Neural Networks
Neural Networks
J.F. Pagel, Philip Kirshtein, in Machine Dreaming and Consciousness, 2017
Artificial Neural Networks
Neural logic computes results with real numbers, the numbers that we
routinely use in arithmetic and counting, as opposed to “crisp” binary ones and
zeros. Specialized hardware and software have been created to implement
neural probabilistic truth/not-truth, or fire/don’t fire logic. This multilevel
approach to fuzzy logic can be used to measure variables by the degree to
which they exhibit a characteristic. Fuzzy state machines act on the multilevel
inputs through a system of rule-based inference. Using a set of multiple variables and inputs acted on
by multiple rules can produce multiple results with different probabilities. Since
entry into and exit from any specific fuzzy state is probabilistic, there is
scope for the autonomous creation of new states based on alternate state-transition
rules leading to a new set of solutions. Using historical data in a more
mutable and adaptable form (sometimes called soft fuzzy logic) can allow the
system to change rules based on previous data, and to then use that changed
approach to create new states.5
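As a toy illustration of measuring a variable by degree rather than as a crisp yes/no (our example; the membership function and its breakpoints are arbitrary assumptions, not drawn from the chapter):

def warm_membership(temp_c):
    # Degree (0..1) to which a temperature counts as "warm",
    # rather than a crisp binary answer
    if temp_c <= 15:
        return 0.0
    if temp_c >= 30:
        return 1.0
    return (temp_c - 15) / 15.0

print(warm_membership(22.5))  # 0.5: midway between not-warm and warm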
Feedback is an intrinsic component of every physiologic system. A dynamic
feedback system makes it possible for set-points to be attained in response
to changing conditions. For machine learning to be possible, feedback
systems are required. The greater the number of feedback loops, the more
precise the attainable response, particularly in fuzzy multivalent
systems.3 Feedback allows for the processing of scrambled, fragmented, and
high-interference signals, and for the processing of data packets in which
basic code data have been dropped or lost.
Artificial neural networks (ANNs) can be used to perform probabilistic
functions in either a hardware or software analogue. These systems are
designed to operate in the same manner in which the neurons and synapses
of the brain are theorized to operate. The architecture of neural connections
can be described as a combinational feedforward network. In order to insert
context, as well as to provide the possibility for feedback self-correction, some
networks add a form of state feedback called back propagation. Once a
feedback system is in place, the possibility of machine learning is present. In
the process of machine learning, system behavior and processing are altered
based on the degree of approximation achieved for any specified goal. In
today’s systems, the human programmer sets the specified goals. Applied
system feedback, however, allows the AI system to develop alternative
approaches to attaining the set goals.
The implementation of feedback systems can be on a real-time or a pseudo-
real-time basis. Pseudo-real-time is a programmed process that provides a
timeline-independent multiplexed environment in which to implement a neural
and synaptic connection network. An artificially produced machine-style
implementation of synapses and neurons (neural network) can be set up to
operate in the present state, next state, or via synaptic weightings
implemented as matrices. Computer programming languages and systems
have been designed to facilitate the development of machine learning for
artificial neural network systems. The Python programming language modules
Theano, numpy, neurolab, and scipy provide a framework for these types of
matrix operations. The Theano library provides an environment for
performing symbolic operations on symbolic datasets to produce scalar results,
which is useful when speed is essential.
Artificial neurons operate by summing inputs (s1,s2,s3) individually scaled by
weight factors (w1,w2,w3) and processing that sum with a nonlinear activation
function (af), most often approximating the logistic function 1/(1 + exp(−x)),
which returns a real value in the range (0,1) (Fig. 6.1). Single artificial neurons
can be envisioned as single combinatorial operators. More complex
operations such as exclusive-or mappings require additional levels of neurons
just as they require additional levels of crisp logic. Processes such as look-up-
table logic can be used to implement multiple gates in one table by mapping
all possible input combinations. Multiple levels of logic can be
implemented in one table.

Figure 6.1. Artificial neuron.
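As a minimal sketch of the summation-and-activation scheme in Fig. 6.1 (ours, not the chapter's listing; the input and weight values are arbitrary), a single artificial neuron can be written in a few lines of Python with numpy:

import numpy as np

def logistic(x):
    # Logistic activation: maps any real-valued sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def artificial_neuron(s, w):
    # Sum the inputs (s1, s2, s3) scaled by weights (w1, w2, w3),
    # then apply the nonlinear activation function af
    return logistic(np.dot(s, w))

s = np.array([1.0, 0.0, 1.0])   # example inputs
w = np.array([0.5, -1.2, 0.8])  # example weights
print(artificial_neuron(s, w))  # a real value in (0, 1)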
Supervised neural networks are programmed and trained to infer a set goal or
solution based on sets of inputs and desired outputs. A solution is developed
based on the result of a specific set of artificial neurons “firing” and being
propagated to the output. From the programmer’s perspective, there are
typical problems that limit the capacities of such systems. These include noise
(the incorporation of incorrect data points leading to a decline in performance),
the need for invariance of a solution with respect to variance in multiple inputs,
and the fact that these systems include a far smaller number of neural
interconnections than an organic brain. These limitations of neural-network
designed systems can produce multiple solutions from the same given set of
inputs. Incomplete training sets are more likely to lead to indecisive outputs.7
An example of an artificial neuron with training written in Python is presented
in Table 6.1. Plots of training action are also presented. Note that not all
inputs result in well-defined outputs. For this example, Fig. 6.2 summarizes
the functionality of such a system. There are eight possible inputs, only seven
of which result in definitive outputs, while only four are defined in training. The
three definitive outputs resulting from untrained input conditions can be
envisioned as hallucinatory since the answers provided are hypothetical and
nonapplicable to the real-world situation, yet are based on the same given set
of data input. The nondeterministic output is indecisive and most often
noninterpretable. Propagated to successive layers of neurons, the results of
these nontrained inputs result in an additional level of unintended operation
leading to unexpected results. The nondeterministic input condition [0,1,0],
combined with noise on any of the inputs, will result in a randomness of that
output as well.
Table 6.1. Simple neuron training and results

Input vector   Trained output   Functional output
[0,0,0]                         [0.99995733]
[0,0,1]                         [5.91174766e-13]
[0,1,0]                         [0.50101207]
[0,1,1]        [0]              [2.53267443e-17]
[1,0,0]        [1]              [1.]
[1,0,1]                         [0.00673341]
[1,1,0]        [1]              [1.]
[1,1,1]        [0]              [2.90423911e-07]

Figure 6.2. Feedforward neural network.
Figs. 6.1 and 6.2 depict neurolab neural networks that provide insight into
multilayer neural networks, demonstrating their training and the implications of
undertraining. The corresponding console outputs for these examples show
results for both crisp and real numbers.
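The chapter's full listing is not reproduced here, but a minimal neurolab sketch along the same lines (our reconstruction; the training parameters are assumptions, and the exact outputs will vary with random initialization) trains a single logistic neuron on the four defined rows of Table 6.1 and then simulates all eight possible inputs:

import numpy as np
import neurolab as nl

# The four trained input/output pairs from Table 6.1; the other
# four of the eight possible 3-bit inputs are left undefined
inp = np.array([[0, 1, 1], [1, 0, 0], [1, 1, 0], [1, 1, 1]])
tgt = np.array([[0], [1], [1], [0]])

# One logistic neuron with three inputs, each in the range [0, 1]
net = nl.net.newff([[0, 1], [0, 1], [0, 1]], [1], [nl.trans.LogSig()])
net.train(inp, tgt, epochs=500, show=100, goal=0.001)

# Simulate every possible input, trained and untrained alike
all_inputs = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
print(net.sim(all_inputs))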
Recurrent and reentrant neural networks (Fig. 6.3) implement connections
from lower layers of neurons into upper layers of the network. Such
a configuration can be used to provide a temporal/sequential dimension to
neural network processing. From a control theory point of view, these
connections can be envisioned as feedback. While feedback is most often a
positive process, particularly in fuzzy processing systems, feedback loops can
also contribute to indeterminate or hallucinatory results. Feedback allows for
the possibility of creating multiple poles and zeros within the neural network
solution space. Poles and zeros potentially allow the conditions of oscillation
and latch-up, both unintended and undesirable, and both leading to incorrect
or unusable solution results attained from the same data analysis.

Figure 6.3. Partially recurrent neural network connections.
ANNs are most often implemented and represented by matrices. A
1×n matrix N represents the inputs and artificial neurons. Each neuron input is
multiplied by a weight matrix, so the summation becomes a sum of weighted
outputs from all other neurons and inputs. An activation function is then used
to calculate individual neuron outputs. For a state machine form with neurons
and inputs represented by matrix N, weighting by matrix W, and outputs by O
with an output definition matrix U:
N(t+1) = af(N(t)W)
O(t+1) = af(N(t)U)
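A sketch of one state-machine update step under these definitions (illustrative only; the sizes and random weights are ours):

import numpy as np

def af(x):
    # Element-wise logistic activation, as earlier in the chapter
    return 1.0 / (1.0 + np.exp(-x))

n, m = 4, 2                    # illustrative sizes
N = np.random.rand(1, n)       # 1 x n matrix of neurons and inputs
W = np.random.randn(n, n)      # weight matrix
U = np.random.randn(n, m)      # output definition matrix

N_next = af(N @ W)             # N(t+1) = af(N(t)W)
O_next = af(N @ U)             # O(t+1) = af(N(t)U)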
Neural networks can be “trained” by initializing the weight matrix W with either
random values or with a set of suggested values and comparing the output O
for a given set of inputs. Target values are established for a set of input values, and the desired
output is derived based on individual neuron inputs. This value is calculated
and W is iteratively adjusted to achieve the desired output O. As networks are
made deeper and wider, and as the internal neurons are less defined and
increasingly nonspecific, training becomes more difficult. Many
repeated iterations are required for training a small network. Large networks
require an exponentially greater number of training iterations and a complete
set of training data to avoid undesirable operation results (such as identifying
a cat as a dog). With the inclusion of temporal feedback to add time-based
relevance to inputs, susceptibility to unintended operation becomes even
more likely.
ANNs can clearly be taught. Learning takes place in a conceptually similar
manner to the way that neural networks are trained to retain memories in the
human CNS. Repeated and stronger interactions can be expressed and
integrated based on probability factors, sometimes called signal weight. A
network can be trained to assign weights to possible connections.
Programmed rules can be set up establishing the parameters and limits for
particular connections. An ANN can be trained in this manner to recognize
errors, minimize rarely used synaptic paths, and adjust its synaptic weights
accordingly. This process is utilized by current systems active in computer
games, stock market analysis, and image identification in order to create
systems that are inherently self-improving.8
Deep learning networks are neural networks with an extremely large
number of implemented levels. Such networks require large computing
resources, and a great deal of data for learning. These learning data must be
presented as a defined data set that has conditions/paths, goals, values, and
positive/negative rewards programmed and identified. Given the instabilities
previously described in implementing recursive networks, without extensive
testing and complete definition of all possible input sequences, including errant
and noisy variants, unintended responses are commonly produced and should be
expected.
NDT Techniques: Signal and Image Processing
S.S. Udpa, L. Udpa, in Encyclopedia of Materials: Science and Technology, 2001
(b) Neural networks
Neural networks represent an attempt to mimic the biological nervous system
with respect to both architecture and information processing strategies.
The network consists of simple processing elements that are interconnected
via weights. The network is first trained using an appropriate learning
algorithm for the estimation of interconnection weights. Once the network is
trained, unknown test signals can be classified. The class of neural
networks used most often for classification tasks is
the multilayer perceptron network. Neural networks have been used with
considerable success in the classification of eddy current and ultrasonic NDE
signals (Udpa and Udpa 1991) (see NDT: Role of Artificial Intelligence and
Neural Networks).
Neural computing in pharmaceutical products and process
development
Jelena Djuris, ... Zorica Djuric, in Computer-Aided Applications in Pharmaceutical
Technology, 2013
Other types of networks
The Modular Neural Network (MNN) is a neural network that has two main
branches. During the training process, branches compete against each other,
resulting in a system that is capable of better generalization. Other types
of neural networks include: Probabilistic Neural Networks (PNN), Learning
Vector Quantization Networks (LVQ), Cascade Correlation Networks (CCN),
Functional Link Networks (FLN), Kohonen Networks (KN), Hopfield Neural
Network (HNN), Gram-Charlier Networks (GCN), Hebb Networks
(HN), Adaline Networks (AN), Hetero-associative Networks (HN), Hybrid
Networks (HN), Holographic Associative Memory (HAM), Spiking Neural
Networks (SNN), Cascading Neural Networks (CNN), Compositional Pattern-
producing Neural Networks (CPPNN), etc. (Zaknich, 2003; Kollias et al., 2006;
Nisbet et al., 2009).
Modern Methods in Natural Products Chemistry
Yi-Ping Phoebe Chen, ... Paolo Carloni, in Comprehensive Natural Products II,
2010
9.15.3.2.2 Neural networks
Neural networks are parallel and distributed information processing systems
that are inspired and derived from biological learning systems such as
human brains. The architecture of neural networks consists of a network of
nonlinear information processing elements that are normally arranged in
layers and executed in parallel. This layered arrangement for the network is
referred to as the topology of a neural network. These nonlinear information
processing elements in the network are defined as neurons, and the
interconnections between these neurons in the network are called synapses or
weights. A learning algorithm must be used to train a neural network so that it
can process information in a useful and meaningful way.
Most neural networks are trained with supervised training algorithms. This
means that the desired output must be provided for each input used in the
training. In other words, both the inputs and the outputs are known. In the
supervised training, a network processes the inputs and compares its actual
outputs against the expected outputs. Errors are then propagated back
through the network, and the weights that control the network are adjusted
with respect to the errors propagated back. This process is repeated until the
errors are minimized; this means that the same set of data is processed many
times as the weights between the layers of the network are refined during the
training of the network. This supervised learning algorithm is often referred to
as a back-propagation algorithm, which is useful for training multiple-layer
perceptron neural networks (MLPs). Figure 6 demonstrates the architecture
for a supervised neural network, which includes three layers, namely, input
layer, output layer, and a hidden middle layer.
Figure 6. A sample structure of a supervised neural network.
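A compact sketch of this training loop (ours, not the chapter's; the XOR data, layer sizes, learning rate, and epoch count are illustrative assumptions) shows errors being propagated back and the weights refined over many passes through the same data:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy problem (XOR), which requires the hidden middle layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

for _ in range(10000):
    # Forward pass: inputs through hidden and output layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate errors and adjust weights accordingly
    err_out = (y - out) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ err_out
    b2 += lr * err_out.sum(axis=0)
    W1 += lr * X.T @ err_hid
    b1 += lr * err_hid.sum(axis=0)

print(np.round(out.ravel(), 3))  # typically approaches [0, 1, 1, 0]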
Neural networks are used in a wide variety of applications in pattern
classification, language processing, complex systems modeling, control,
optimization, and prediction.92 Neural networks have also been actively used in
many bioinformatics applications such as DNA sequence prediction, protein
secondary structure prediction, gene expression profile classification, and
analysis of gene expression patterns.93 Neural networks have been applied
widely in biology since the 1980s.94 For example, Stormo et al.95 reported
prediction of the translation initiation sites in DNA sequences. Baldi and
Brunak96 used applications in biology to explain the theory of neural networks.
The concepts of neural networks used in pattern classification and signal
processing have been successfully applied in bioinformatics. Wu et
al.93,97–99 applied neural networks to classify protein sequences. Wang et
al.100 applied neural networks to protein sequence classification by extracting
features from the protein data and using them in combination with the
Bayesian neural network (BNN). Qian and Sejnowski101 predicted the protein
secondary structure using neural networks. Neural networks have also been
applied to the analysis of gene expression patterns as an alternative to
hierarchical cluster methods.75,100,102,103 Narayanan et al.104 demonstrated the
application of single-layer neural networks to analyze gene expression.
Besides SVMs and neural networks, there are also machine learning methods
for gene selection such as ‘discriminant analysis’, which distinguishes a
selected dataset from the rest of the data, and the ‘k-nearest neighbor (KNN)
algorithm’, which is based on a distance function for pairs of observations,
such as the Euclidean distance. In this classification hypothesis, the k nearest
neighbors of a test sample among the training data are computed. The similarities of one sample
of testing data to the KNN are then aggregated according to the class of the
neighbors, and the testing sample is assigned to the most similar class. One
of the advantages of KNN is that it is well suited for multimodal classes as its
classification decision is based on a small neighborhood of similar objects. So,
even if the target class consists of objects whose independent variables have
different characteristics for different subsets (multimodal), it can still lead to
good accuracy. A major drawback of the similarity measure used in KNN is
that it uses all features equally in computing similarities. This can lead to poor
similarity measures and classification errors, when only a small subset of the
features is useful for classification. Li et al.105 successfully used an approach
that combines a genetic algorithm (GA) and the KNN method to identify genes
that can jointly discriminate between different classes of samples.
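A bare-bones version of the KNN procedure described above (our sketch; the two-class toy data are hypothetical) computes Euclidean distances, takes the k nearest training samples, and assigns the majority class:

import numpy as np

def knn_classify(x, train_X, train_y, k=3):
    # Euclidean distance from the test sample to every training sample
    dists = np.linalg.norm(train_X - x, axis=1)
    # Aggregate the classes of the k nearest neighbors ...
    labels, counts = np.unique(train_y[np.argsort(dists)[:k]], return_counts=True)
    # ... and assign the test sample to the most similar (majority) class
    return labels[np.argmax(counts)]

train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array([0, 0, 1, 1])
print(knn_classify(np.array([0.85, 0.75]), train_X, train_y))  # -> 1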
Neural Networks: Biological Models and Applications
F.H. Guenther, in International Encyclopedia of the Social & Behavioral Sciences,
2001
5 Pattern Recognition Applications
Neural networks are capable of learning complicated nonlinear relationships
from sets of training examples. This property makes them well suited to
pattern recognition problems involving the detection of complicated trends in
high-dimensional datasets. One such problem domain is the detection of
medical abnormalities from physiological measures. Neural networks have
been applied to problems such as the detection of cardiac abnormalities from
electrocardiograms and breast cancer from mammograms, and some neural
network diagnostic systems have proven capable of exceeding the diagnostic
abilities of expert physicians. Supervised learning networks have been applied
to a number of other pattern recognition problems, including visual object
recognition, speech recognition, handwritten character recognition, stock
market trend detection, and scent detection (e.g., Carpenter and
Grossberg 1991). For further reading on neural networks and their biological
bases, see Anderson et al. (1988), Arbib (1995), and Kandel et al. (2000).
Linear Algebra for Neural Networks
H. Abdi, in International Encyclopedia of the Social & Behavioral Sciences, 2001
Neural networks are quantitative models which learn to associate input and
output patterns adaptively with the use of learning algorithms. We expose four
main concepts from linear algebra which are essential for analyzing these
models: (a) the projection of a vector, (b) the eigen and singular value
decomposition, (c) the gradient vector and Hessian matrix of a vector function,
and (d) the Taylor expansion of a vector function. We illustrate these concepts
by the analysis of the Hebbian and Widrow–Hoff rules and some basic neural
network architectures (i.e., the linear autoassociator, the linear
heteroassociator, and the error backpropagation network). We show also
that neural networks are equivalent to iterative versions of standard statistical
and optimization models such as multiple regression analysis and principal
component analysis.
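As a pointer to one of the rules this chapter analyzes, a minimal Widrow–Hoff (delta-rule) sketch (ours; the learning rate, epoch count, and toy data are assumptions) adjusts a weight vector toward the least-squares solution, which is the sense in which such networks behave as iterative multiple regression:

import numpy as np

def widrow_hoff(X, t, eta=0.05, epochs=200):
    # Iteratively reduce the squared error between Xw and the targets t
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            w += eta * (target - x @ w) * x  # per-example gradient step
    return w

# Toy targets generated by t = 2*x1 - x2 (hypothetical data)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
t = np.array([2.0, -1.0, 1.0, 3.0])
print(widrow_hoff(X, t))  # approaches [2, -1]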
Computer Analysis of Nuclear Cardiology Procedures
ERNEST V. GARCIA, ... RUSSELL D. FOLKS, in Emission Tomography, 2004
2 Neural Networks
Neural networks have been developed as an attempt to simulate the highly
connected biological system found in the brain through the use of computer
hardware and software. In the brain, a neuron receives input from many
different sources. It integrates all these inputs and fires (sending a pulse down
the nerve to other connected neurons) if the result is greater than a set
threshold. In the same way, a neural network has nodes (the equivalent of a
neuron) that are interconnected and receive input from other nodes. Each
node sums or integrates its inputs and then uses a linear or nonlinear transfer
function to determine whether the node should fire. A neural network can be
arranged in many different ways; for example, it can have one or more layers
of nodes, it can be fully connected (where every node is connected to every
other node), or it can be partially connected. Also, it can have feed-forward
processing (in which processing only travels one direction) or it can have
feedback processing (in which processing travels both ways).
Another important aspect of neural networks is their ability to learn based on
input patterns. A neural network can be trained in either a supervised or
unsupervised mode. In the supervised mode, the network is trained by
presenting it with input and the desired output. The error between the output
of the network and the desired output is used to adjust the weights of the
inputs of all the nodes in such a way that the desired output is achieved. This
is repeated for many input sets of training data and for multiple cycles per
training set. Once the algorithm has converged (i.e., once the weights
change very little in response to training), the network can be tested with
prospective data. Unsupervised training is similar to supervised training
except that the output is not provided during learning. Unsupervised training is
useful when the structure of the data is unknown. The main advantage of a
neural network is its ability to learn directly from training data rather than from
rules provided by an expert. However, if the training data are not complete,
the network may not give reliable answers.
Neural networks have been used by three different groups in nuclear
cardiology to identify which coronary arteries are expected to have stenotic
lesions for a specific hypoperfused distribution (Hamilton et al., 1995). These
methods differ in the number of input and output nodes used. The number of
nodes that may be used (i.e., the complexity of the model) depends on the
amount of available training data. The output of the neural network encodes
the information it is to report and can be as simple as a single node signifying
that there is a lesion present in the myocardium.
SPECT Processing, Quantification, and Display
Tracy L. Faber, ... Ernest V. Garcia, in Clinical Nuclear Cardiology (Fourth Edition),
2010
Neural Networks
Neural networks have been developed as an attempt to simulate the highly
connected biological system found in the brain through the use of computer
hardware and/or software. In the brain, a neuron receives input from many
different sources. It integrates all of these inputs and “fires” (sending a pulse
down the nerve to other connected neurons) if the result is greater than a set
threshold. In the same way, a neural network has nodes (the equivalent of a
neuron) that are interconnected and receive input from other nodes. Each
node sums or integrates its inputs and then uses a linear or nonlinear transfer
function to determine if the node should “fire.” A neural network can be
arranged in many different ways; for example, it can have one or more layers
of nodes, it can be fully connected (where every node is connected to every
other node), or it can be partially connected. Also, it can have feed-forward
processing (where processing only travels one direction), or it can have
feedback processing (where processing travels both ways).
Another important aspect of neural networks is their ability to “learn” based on
input patterns. A neural network can be trained in either a supervised or
unsupervised mode. In the supervised mode, the net is trained by presenting
it with an input and giving it the desired output. The error between the output
of the net and the desired output is then propagated backward through the
net, adjusting the weights of the inputs of all the nodes in such a way that the
desired output is achieved. This is repeated for many input sets of training
data and for multiple cycles per training set. Once the net has converged (i.e.,
the weights change very little for additional training sets or cycles), it can be
tested with prospective data. This kind of training paradigm is very useful for
finding patterns out of a known collection of patterns. Unsupervised training is
similar to supervised training, but instead of providing the net with the desired
output, it is free to find its own output. This type of training can be very useful
for finding patterns in data where there is no known set of existing patterns.
The main advantage of a neural network is the ability to solve a problem that
can be represented by some sort of training data without needing an expert.
However, if the training data are not complete, or if a problem is presented to
the network that it has not been trained to solve, it may not give reliable
answers.
With careful training, neural networks can provide a unique approach
to solving problems. For example, neural networks have already been
used by three different groups in nuclear cardiology to identify which
coronaries are expected to have stenotic lesions for a specific hypoperfused
distribution.45–47 These methods vary in the number of input and output nodes
used. The more training data available, the more nodes the model can
support. The output of these systems can be as simple as a single node
signifying that there is a lesion present in the myocardium.
WeAidU is one example of an automatic interpretation neural network for
interpretation of myocardial perfusion scans.48 In a European multicenter trial of
this approach, perfusion studies from different hospitals along with the
physicians’ interpretations were transmitted to the neural network site where
the images were processed and the interpretations compared. Agreement
between hospitals varied between 74% and 92%.49
Knowledge Discovery in Biomedical Data: Theory and
Methods
John H. Holmes, in Methods in Biomedical Informatics, 2014
7.4.3.5.2 Neural Networks
Neural networks are perhaps the oldest of the naturally inspired computing
approaches, being based on theories dating back to the late 19th century, but
have only fairly recently been applied to data mining. A neural network is a
semantic network, where a set of input/output units (“neurons”) are connected
to each other, and these connections are weighted relative to the importance
or strength of the link between one neuron and the next. These connections
are roughly analogous to synapses in biological nervous systems. Neural
networks learn by adjusting the connection weights during training. Neurons
are all-or-none devices that “fire,” or transmit a message to the next
connected neuron, based on meeting or exceeding some threshold.
An early simulated neuron was the perceptron [118], which incorporates the
basis for the neural network. As shown in Figure 7.24, the perceptron takes
inputs (I) from the environment, such as a vector of features from a database.
Each feature has a specific value such as one would find in the database.
Thus, for a vector that includes the three features {Age, Height, Weight}, the
values for a given training case could be {32, 61, 120}.

Figure 7.24. A simple perceptron. Each input is connected to the neuron, shown in gray. Each
connection has a weight, the value of which evolves over time, and is used to modify the input.
Weighted inputs are summed, and this sum determines the output of the neuron, which is a
classification (in this case, either 0 or 1).
These values are weighted by the connections (shown as “W” in the figure),
and these weights evolve over time during training. These weights represent
the knowledge model in the perceptron, and are sent to the perceptron where
they are added and the sum of the weights is evaluated to determine if a
threshold is exceeded. The threshold function determines the output of the
neuron based on the summed, weighted inputs. Based on the value of the
summed weights, the perceptron sends one message such as a decision or
classification back to the environment, represented in the figure as either 0 or
1. In the case of supervised learning, the output of the perceptron can be
compared to the known class of a training case, and based on the accuracy of
the output decision, the weights and the threshold will be adjusted to
strengthen or weaken the weights or the threshold, or both. Typically, weight
and threshold adjustments are made only when an error occurs in the output.
For example, weights can be adjusted at each time step, determined by the
exposure of the system to a training case, as follows:
wi(t+1) = wi(t) + Δwi(t),
where wi = weight of connection i, and
Δwi(t) = (D − O)Ii,
where D = known classification and O = classification output by the
perceptron.
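A short sketch of this update rule in Python (ours, not the chapter's; the fixed threshold, the learning rate eta scaling the (D − O)Ii step, and the AND-style toy data are all illustrative assumptions):

import numpy as np

def train_perceptron(X, D, eta=0.2, epochs=20, threshold=0.5):
    w = np.zeros(X.shape[1])  # one weight per input feature
    for _ in range(epochs):
        for I, d in zip(X, D):
            O = 1 if np.dot(w, I) >= threshold else 0  # thresholded output
            w += eta * (d - O) * I                     # Δwi = (D − O)Ii, scaled by eta
    return w

# Hypothetical training cases: class is 1 only when both features are 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0, 0, 0, 1])
print(train_perceptron(X, D))  # weights that separate the two classes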
Numerous extensions have been made to the perceptron model, nearly all of
which involve multiple neurons connected in layers, such as an input
(“sensory”) layer, an output (“effector”) layer, and one or more middle
(“hidden”) layers. The hidden layers build an internal model of the way input
patterns are related to the desired output, and as a result, this is where the
knowledge representation is implicit in this model: it is the synaptic
connectivity that is the representation proper. In general, the larger the
number of hidden layers and the neurons they contain, the less error during
training. This type of architecture is used in the feed-forward, back-
propagation neural network. A schematic of a feed-forward, back-propagation
network is shown in Figure 7.25.
Figure 7.25. A feed-forward, back-propagation neural network. In this example, the inputs
represent values (yes/no) of four of the features used to determine if a patient who has suffered
an animal bite needs to be treated prophylactically for rabies. Each connection is weighted and
these weights evolve during training. This simple model is similar to the perceptron in Figure
7.24, except that it includes a hidden layer and that it propagates error back to the input weights
during training. The hidden layer provides additional knowledge representation internal to the
system, and can provide improvements in classification performance.
This network uses the feed-forward approach of the perceptron, but includes
the ability to provide (propagate) feedback about the accuracy of its output to
the components. The pseudocode for a feed-forward, back-propagation neural
network is shown below:
Initialize weights
For each training case i
    Present input i
    Generate output Oi
    Calculate error (Di − Oi)
    Do while output incorrect
        For each layer j
            Pass error back to each neuron n in layer j
            Modify weight in each neuron n in layer j
    EndDo
    i = i + 1
The hidden layer is the workhorse in this type of neural network. Its outputs,
whether fed forward or backward through the network, are determined by a
variety of different functions, a popular one of which is a logistic function:
Oj = 1/(1 + e^(−Nj)),
where Oj is the output of hidden layer j and
Nj = ∑i WijOi + θj.
Nj is the sum of the weighted outputs plus a bias term θj.
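Written directly from these two formulas (our small sketch; the layer sizes and values are hypothetical), the hidden-layer computation is:

import numpy as np

def hidden_layer_output(O_prev, W, theta):
    N = O_prev @ W + theta           # Nj = sum_i Wij*Oi + theta_j
    return 1.0 / (1.0 + np.exp(-N))  # Oj = 1/(1 + e^(-Nj))

O_prev = np.array([0.2, 0.7, 0.1])                    # outputs feeding layer j
W = np.array([[0.5, -0.3], [0.8, 0.1], [-0.6, 0.9]])  # weights Wij
theta = np.array([0.05, -0.1])                        # bias terms
print(hidden_layer_output(O_prev, W, theta))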
Neural networks offer several advantages for data mining. They have shown
excellent performance in many settings, particularly in prediction, where the
trained network can be used to identify the probability of a specific outcome,
given the value of an input feature vector for a given patient. In biomedical
settings, they are especially useful because of their resistance to noise in the
data. However, neural networks pose several challenges to those using them
for data mining. First, most neural network software requires that the architecture be
specified in advance, so that the determination of how many hidden layers
(and how many neurons are in those layers) must be made in advance of
training. Second, there are many parameters that need to be set in a typical
neural network, and some of these are often opaque to a novice user. These
include settings that bias the connection weights as the network is trained.
Third, a neural network can take a long time to train, particularly for some
biomedical databases containing many variables, many records, and
numerous contradictions in the data. Finally, the knowledge contained in a
neural network is not easily translated to something that is readily
understandable, such as one would find with a regression model. However,
there has been considerable research into extracting the knowledge hidden in
neural networks, as noted in [88].
Neural networks have been used in biomedical data mining in such domains
as diagnosis [64,92,131], clinical prediction [29,41,102,108],
pharmacovigilance [14,134], drug adverse event detection [13,120,136],
personalized medicine [119], image classification [9,37],
and proteomics [26,89,114,142,151]. An excellent introduction to the use of
neural networks in mining biomedical data is offered in [150].
Recent Advances in Parkinson’s Disease: Basic Research
Thomas V. Wiecki, Michael J. Frank, in Progress in Brain Research, 2010
Conclusion and outlook
Neural network models allow us to bridge the gap between the behavioural
and neuronal level. By integrating data from different domains into one
conglomerate model, we might start to see the ‘bigger picture’. For this
approach to be successful, it must stay close to empirical data and provide
concrete predictions which have to be tested experimentally to possibly refine
the model. These models offer an advantage over classic box-and-arrow
diagrams: neural network models provide a more disciplined approach that is
grounded by mathematics and allows exploration of more complex dynamics
than are considered by static anatomical diagrams. As the research described
above has hopefully shown, this approach has already proven to be very
valuable in understanding the BG and associated disorders. Nevertheless, we
look forward to revising the models to incorporate other existing and future
biological data.