
BIOLOGICAL NEURONS

The brain is principally composed of about 10 billion neurons, each
connected to about 10,000 other neurons. Each of the yellow blobs in the
picture above is a neuronal cell body (soma), and the lines are the input
and output channels (dendrites and axons) which connect them.
Each neuron receives electrochemical inputs from other neurons at its
dendrites. If the sum of these electrical inputs is sufficiently powerful to
activate the neuron, it transmits an electrochemical signal along the axon
and passes this signal on to the other neurons whose dendrites are attached
at any of the axon terminals. These attached neurons may then fire.
It is important to note that a neuron fires only if the total signal received
at the cell body exceeds a certain level. The neuron either fires or it
doesn't; there are no different grades of firing.
So, our entire brain is composed of these interconnected electrochemical
transmitting neurons. From a very large number of extremely simple
processing units (each performing a weighted sum of its inputs, and then
firing a binary signal if the total input exceeds a certain level) the brain
manages to perform extremely complex tasks.
This is the model on which artificial neural networks are based. Thus far,
artificial neural networks haven't even come close to modeling the
complexity of the brain, but they have proven to be good at problems which
are easy for a human but difficult for a traditional computer, such as image
recognition and making predictions based on past knowledge.
The perceptron
The perceptron is a mathematical model of a biological neuron. While in
actual neurons the dendrite receives electrical signals from the axons of
other neurons, in the perceptron these electrical signals are represented as
numerical values. At the synapses between the dendrite and axons,
electrical signals are modulated in various amounts. This is also modeled in
the perceptron by multiplying each input value by a value called the weight.
An actual neuron fires an output signal only when the total strength of the
input signals exceed a certain threshold. We model this phenomenon in a
perceptron by calculating the weighted sum of the inputs to represent the
total strength of the input signals, and applying a step function on the sum
to determine its output. As in biological neural networks, this output is fed to
other perceptrons.

(Fig. 1) A biological neuron

(Fig. 2) An artificial neuron (perceptron)
There are a number of terms commonly used for describing neural
networks. They are listed in the table below:

The input vector - All the input values of each perceptron are collectively
called the input vector of that perceptron.

The weight vector - Similarly, all the weight values of each perceptron are
collectively called the weight vector of that perceptron.
What can a perceptron do?
As mentioned above, a perceptron calculates the weighted sum of the input
values. For simplicity, let us assume that there are two input values, x and
y, for a certain perceptron P. Let the weights for x and y be A and B
respectively; the weighted sum could then be represented as: A x + B y.
Since the perceptron outputs a non-zero value only when the weighted
sum exceeds a certain threshold C, one can write down the output of this
perceptron as follows:

Output of P = { 1 if A x + B y > C
              { 0 if A x + B y <= C
Recall that A x + B y > C and A x + B y < C are the two regions on the xy
plane separated by the line A x + B y - C = 0. If we consider the input (x, y)
as a point on a plane, then the perceptron actually tells us which region of
the plane this point belongs to. Such regions, since they are separated by a
single line, are called linearly separable regions.
This result is useful because it turns out that some logic functions such as
the boolean AND, OR and NOT operators are linearly separable, i.e. they
can be performed using a single perceptron. We can illustrate (for the 2D
case) why they are linearly separable by plotting each of them on a graph:


(Fig. 3) Graphs showing linearly separable
logic functions
In the above graphs, the two axes are the inputs which can take the value
of either 0 or 1, and the numbers on the graph are the expected output for
a particular input. Using an appropriate weight vector for each case, a
single perceptron can perform all of these functions.
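As a quick illustration, here is a minimal sketch of a perceptron computing these functions. The particular weights and thresholds shown are just one workable choice among many:

```python
def perceptron(inputs, weights, threshold):
    """Fire (output 1) only if the weighted sum exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# One possible choice of weight vector and threshold per logic function:
AND = lambda x, y: perceptron((x, y), weights=(1, 1), threshold=1.5)
OR  = lambda x, y: perceptron((x, y), weights=(1, 1), threshold=0.5)
NOT = lambda x:    perceptron((x,),  weights=(-1,),  threshold=-0.5)
```

Each function corresponds to one line in the graphs above: for AND, the line x + y = 1.5 separates the point (1, 1) from the other three.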
However, not all logic operators are linearly separable. For instance, the
XOR operator is not linearly separable and cannot be achieved by a single
perceptron. Yet this problem could be overcome by using more than one
perceptron arranged in feed-forward networks.

(Fig. 4) Since it is impossible to draw a line to divide the regions
containing either 1 or 0, the XOR function is not linearly separable.
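Here is a sketch of one such feed-forward arrangement: two hidden perceptrons compute OR and NAND (one workable choice of weights among many), and an output perceptron ANDs them together to produce XOR:

```python
def perceptron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def XOR(x, y):
    # Hidden layer: two linearly separable sub-problems.
    h1 = perceptron((x, y), (1, 1), 0.5)     # OR(x, y)
    h2 = perceptron((x, y), (-1, -1), -1.5)  # NAND(x, y)
    # Output layer combines them: XOR = OR AND NAND.
    return perceptron((h1, h2), (1, 1), 1.5)
```

No single line separates the XOR outputs, but each hidden perceptron's sub-problem is linearly separable, and so is the final combination.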

History: The 1940's to the 1970's
In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts
wrote a paper on how neurons might work. In order to describe how neurons
in the brain might work, they modeled a simple neural network using electrical
circuits.
In 1949, Donald Hebb wrote The Organization of Behavior, a work which
pointed out the fact that neural pathways are strengthened each time they are
used, a concept fundamentally essential to the ways in which humans learn. If
two nerves fire at the same time, he argued, the connection between them is
enhanced.
As computers became more advanced in the 1950's, it was finally possible to
simulate a hypothetical neural network. The first step towards this was made
by Nathanial Rochester from the IBM research laboratories. Unfortunately for
him, the first attempt to do so failed.
In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models
called "ADALINE" and "MADALINE." In a typical display of Stanford's love for
acronyms, the names come from their use of Multiple ADAptive LINear
Elements. ADALINE was developed to recognize binary patterns so that if it
was reading streaming bits from a phone line, it could predict the next bit.
MADALINE was the first neural network applied to a real world problem, using
an adaptive filter that eliminates echoes on phone lines. While the system is
as ancient as air traffic control systems, like air traffic control systems, it is still
in commercial use.
In 1962, Widrow & Hoff developed a learning procedure that examines the
value on the line before the weight (i.e. 0 or 1) and adjusts the weight
according to the rule: Weight Change = (Pre-Weight line value) * (Error /
(Number of Inputs)). It is based on the idea that while one active perceptron
may have a big error, one can adjust the weight values to distribute it across
the network, or at least to adjacent perceptrons. Applying this rule still
results in an error if the line before the weight is 0, although this will
eventually correct itself. If the error is conserved so that all of it is
distributed to all of the weights, then the error is eliminated.
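The rule as stated could be sketched as follows (the function name is ours, chosen for illustration):

```python
def widrow_hoff_update(weights, inputs, error):
    """One Widrow-Hoff step: spread the error evenly across the weights
    whose input lines were active. An input of 0 leaves its weight
    unchanged, which is why some error remains in that case."""
    n = len(inputs)
    return [w + x * (error / n) for w, x in zip(weights, inputs)]
```

For instance, with two inputs of which only the first is active, the whole per-input share of the error goes to the first weight and the second stays put.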
Despite the later success of the neural network, traditional von Neumann
architecture took over the computing scene, and neural research was left
behind. Ironically, John von Neumann himself suggested the imitation of
neural functions by using telegraph relays or vacuum tubes.
In the same time period, a paper was written that suggested there could not
be an extension from the single layered neural network to a multiple layered
neural network. In addition, many people in the field were using a learning
function that was fundamentally flawed because it was not differentiable
across the entire line. As a result, research and funding went drastically down.
This was coupled with the fact that the early successes of some neural
networks led to an exaggeration of the potential of neural networks, especially
considering the practical technology at the time. Promises went unfulfilled,
and at times greater philosophical questions led to fear. Writers pondered the
effect that the so-called "thinking machines" would have on humans, ideas
which are still around today.
The idea of a computer which programs itself is very appealing. If Microsoft's
Windows 2000 could reprogram itself, it might be able to repair the thousands
of bugs that the programming staff made. Such ideas were appealing but very
difficult to implement. In addition, von Neumann architecture was gaining in
popularity. There were a few advances in the field, but for the most part
research was few and far between.
In 1972, Kohonen and Anderson independently developed a similar network,
which we will discuss more later. They both used matrix mathematics to
describe their ideas, but did not realize that what they were doing was
creating an array of analog ADALINE circuits. The neurons are supposed to
activate a set of outputs instead of just one.
The first multilayered network was developed in 1975, an unsupervised
network.
History: The 1980's to the present
In 1982, interest in the field was renewed. John Hopfield of Caltech presented
a paper to the National Academy of Sciences. His approach was to create
more useful machines by using bidirectional lines. Previously, the connections
between neurons were only one way.
That same year, Reilly and Cooper used a "Hybrid network" with multiple
layers, each layer using a different problem-solving strategy.
Also in 1982, there was a joint US-Japan conference on
Cooperative/Competitive Neural Networks. Japan announced a new Fifth
Generation effort on neural networks, and US papers generated worry that the
US could be left behind in the field. (Fifth generation computing involves
artificial intelligence. The first generation used switches and wires, the
second generation used the transistor, the third generation used solid-state
technology like integrated circuits and higher level programming languages,
and the fourth generation is code generators.) As a result, there was more
funding and thus
more research in the field.
In 1986, with multiple layered neural networks in the news, the problem was
how to extend the Widrow-Hoff rule to multiple layers. Three independent
groups of researchers, one of which included David Rumelhart, a former
member of Stanford's psychology department, came up with similar ideas
which are now called back propagation networks because it distributes pattern
recognition errors throughout the network. Hybrid networks used just two
layers; these back-propagation networks use many. The result is that back-
propagation networks are "slow learners," needing possibly thousands of
iterations to learn.
Now, neural networks are used in several applications, some of which we will
describe later in our presentation. The fundamental idea behind the nature of
neural networks is that if it works in nature, it must be able to work in
computers. The future of neural networks, though, lies in the development of
hardware. Much like the advanced chess-playing machines like Deep Blue,
fast, efficient neural networks depend on hardware designed specifically for
their eventual use.
Research that concentrates on developing neural networks is relatively slow.
Due to the limitations of processors, neural networks take weeks to learn.
Some companies are trying to create what is called a "silicon compiler" to
generate a specific type of integrated circuit that is optimized for the
application of neural networks. Digital, analog, and optical chips are the
different types of chips being developed. One might immediately discount
analog signals as a thing of the past. However, neurons in the brain actually
work more like analog signals than digital signals. While digital signals have
two distinct states (1 or 0, on or off), analog signals vary between minimum
and maximum values. It may be a while, though, before optical chips can be
used in commercial applications.
Conventional computing versus artificial neural
networks
There are fundamental differences between conventional computing and the
use of neural networks. In order to best illustrate these differences one must
examine two different types of learning, the top-down approach and
the bottom-up approach. Then we'll look at what it means to learn and
finally compare conventional computing with artificial neural networks.
Top-down learning
With the advent of neural networks, there are several tradeoffs between the
von Neumann architecture and the architecture used in neural networks. One
of the fundamental differences between the two is that traditional computing
methods work well for problems that have a definite algorithm, problems that
can be solved by a set of rules. Problems such as creating graphs of
equations, keeping track of orders on amazon.com, and even to some extent
algorithms of simple games, are not difficult for a conventional computer.
In order to examine more deeply the benefits and tradeoffs of conventional
computing and think about how a computer learns, we need to introduce the
notion of top-down versus bottom-up learning.
The notion of top-down learning is best described in programming lingo as a
set of sequential, cascading if/else statements which are not self-modifying. In
other words, the computer is programmed to perform certain actions at pre-
defined decision points. Let's say you want to write an algorithm to decide
whether a human should go to sleep. The program may be simple:
if (IsTired()) {
    GoToSleep();
} else {
    StayAwake();
}
But in the real world, it is not that simple. If you are in class, you do not want
to sleep since you would be missing valuable information. Or if you are
working on an important programming assignment, you may be determined to
finish it before you go to sleep. A revised version may thus look like:
if (IsTired()) {
    if (!IsInClass() && !WorkingOnProject()) {
        GoToSleep();
    } else {
        if (IsInClass()) {
            StayAwake();
        } else {
            if (WorkingOnProject()) {
                if (AssignmentIsDueTomorrow()) {
                    if (AssignmentIsCompleted()) {
                        GoToSleep();
                    } else {
                        StayAwake();
                    }
                } else {
                    GoToSleep();
                }
            } else {
                StayAwake();
            }
        }
    }
} else {
    StayAwake();
}
This "simple" program only looks at a few possible things which may affect the
decision--whether you're tired, you're in class, or you have an assignment due
tomorrow.
This is a decision that human beings make with ease (although college
students don't always make the correct decision), yet to program it into a
computer takes much more code than the lines presented above. There are
so many different aspects which might affect the ultimate decision that an
algorithm covering them all, programmed into the computer using this
top-down method, would be practically impossible.
This is the fundamental limitation of the top-down method: in order to program
a complex task, the programmer may have to spend years developing a
correct top-down approach. This doesn't even begin to include the subtle
possibilities which the programmer may not think of. Complex tasks may
never be programmed using the top-down approach.
Bottom-Up Learning
The bottom-up approach learns more by example, or by doing, than by a
complex set of cascading if/else statements. It tries to do something in a particular
fashion, and if that doesn't work, it tries something else. By keeping track of
the actions that didn't work and the ones that did, one can learn. Moreover,
the program is inherently self-modifying. One can make a program in C that
modifies itself, but the idea of code being self-modifying is avoided in the
commercial world because of the difficulty of debugging self-modifying code.
This is the way in which Arthur Samuel programmed his checkers-playing
machine. The machine started out incompetent at playing checkers. But as it
went through several games, it learned a little each time, adjusting its own
program, so that in the end it could beat the individual who programmed it.
Another reason for using a bottom-up approach in neural networks is that
there is already an existing example which solves the problems associated
with the approach: humans.
What does it mean to learn?
This brings us into the overriding philosophical question: what does it mean to
learn? Is learning in humans simply a bunch of cascading if/else statements
that, when we learn, modify themselves to create new combinations? Or does
interacting with the environment change the arrangement and intensity of
neurons to form a bottom-up approach?
It seems as though the human brain has a combination of each. We do not
have to learn that if we're hungry we have to eat or if we're tired, we need
sleep. These things seem to be hardwired into the human brain. Other things,
such as reading a book, come only with interacting with the environment and
the society around an individual.
It seems likely that many applications of neural networks in the future would
have to use a combination of instructions hardwired into the system and a
neural network. For example, the checkers machine that Arthur Samuel built did
not start with a blank slate and learn how to play checkers over time; the
machine was provided with the knowledge of a checkers book--things like
which moves are ideal and which moves fail miserably.
Comparison between conventional computers and
neural networks
Parallel processing
One of the major advantages of the neural network is its ability to do many
things at once. With traditional computers, processing is sequential--one task,
then the next, then the next, and so on. The idea of threading makes it appear
to the human user that many things are happening at one time. For instance,
the Netscape throbber is shooting meteors at the same time that the page is
loading. However, this is only an appearance; processes are not actually
happening simultaneously.
The artificial neural network is an inherently multiprocessor-friendly
architecture. Without much modification, it goes beyond one or even two
processors of the von Neumann architecture. The artificial neural network is
designed from the onset to be parallel. Humans can listen to music at the
same time they do their homework--at least, that's what we try to convince our
parents in high school. With a massively parallel architecture, the neural
network can accomplish a lot in less time. The tradeoff is that processors have
to be specifically designed for the neural network.
The ways in which they function
Another fundamental difference between traditional computers and artificial
neural networks is the way in which they function. While computers function
logically with a set of rules and calculations, artificial neural networks can
function via images, pictures, and concepts.
Based upon the way they function, traditional computers have to learn by
rules, while artificial neural networks learn by example, by doing something
and then learning from it. Because of these fundamental differences, the
applications to which we can tailor them are extremely different. We will
explore some of the applications later in the presentation.
Self-programming
The "connections" or concepts learned by each type of architecture are different
as well. Von Neumann computers are programmed in higher level
languages like C or Java, which are then translated down to the machine's
assembly language. Because of their style of learning, artificial neural
networks can, in essence, "program themselves." While the conventional
computers must learn only by doing different sequences or steps in an
algorithm, neural networks are continuously adaptable by truly altering their
own programming. It could be said that conventional computers are limited by
their parts, while neural networks can work to become more than the sum of
their parts.
Speed
The speed of each computer is dependent upon different aspects of the
processor. Von Neumann machines require either big processors or the
tedious, error-prone idea of parallel processors, while neural networks
require the use of multiple chips custom-built for the application.
Architecture of Neural Networks
Competitive neural networks - Competitive neural networks set the different neurons
against each other, hoping that the "winner" will be close to the answer.
Feed forward networks - Feed forward networks only allow a signal to pass through the
neural network one way.
Different types of usage of neural networks - Different ways of using neural
networks to solve problems.
Variations on neural networks - Variations on some neural networks.
Simple competitive networks:
In classification and prediction problems, we are provided with training sets with
desired outputs, so backpropagation together with feed-forward networks are useful in
modeling the input-output relationship. However, sometimes we have to analyze raw
data of which we have no prior knowledge. The only possible way is to find out
special features of the data and arrange the data in clusters so that elements that are
similar to each other are grouped together. Such a process can be readily performed
using simple competitive networks.
Simple competitive networks are composed of two sub-networks: the Hamming
net and the Maxnet. Each of them specializes in a different function:
1. The Hamming net measures how much the input vector resembles the weight vector of each
perceptron.
2. The Maxnet finds the perceptron with the maximum value.
In order to understand how these two seemingly unrelated networks function together,
we need to examine more closely the details of each one.
The Hamming net:

(Fig.1) A Hamming net.
Each perceptron at the top layer of the Hamming net calculates a weighted sum of the
input values. This weighted sum can be interpreted as the dot product of the input
vector and the weight vector:

weighted sum = i . w = |i| |w| cos θ

where i and w are the input vector and the weight vector respectively, and θ is the
angle between them.
If w and i are of unit length, then the dot product depends only on cos θ. Since the
cosine function increases as the angle decreases, the dot product gets bigger when the
two vectors are close to each other (the angle between them is small). Hence the
weighted sum each perceptron calculates is a measure of how closely its weight vector
resembles the input vector.
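This resemblance measure can be sketched directly (the function names here are ours, chosen for illustration):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def hamming_score(input_vec, weight_vec):
    """Dot product of the unit-length vectors = cos(theta): the closer
    the two vectors, the higher the score of this top-layer node."""
    i, w = normalize(input_vec), normalize(weight_vec)
    return sum(a * b for a, b in zip(i, w))
```

A weight vector pointing the same way as the input scores 1, an orthogonal one scores 0, regardless of the vectors' original lengths.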
The Maxnet:

(Fig.2) A Maxnet.

The Maxnet is a fully connected network, with each node connecting to every other
node, including itself. The basic idea is that the nodes compete against each other by
sending out inhibiting signals to each other.
This is done by setting the weights of the connections between different nodes to be
negative and applying the following algorithm:
Algorithm: at each step, every node subtracts from its own value a small fraction ε of
the sum of all the other nodes' values; any value that drops below zero is set to zero.
Using the above algorithm, all nodes converge to 0 except for the node with the
maximum initial value. In this way the Maxnet finds the node with the maximum
value.
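The update above can be sketched as follows. This is a standard formulation of the Maxnet competition, written out since the original algorithm figure is not reproduced here:

```python
def maxnet(values, epsilon=0.1, steps=100):
    """Run the Maxnet competition: every node inhibits every other node
    by a small fraction epsilon until only the maximum survives."""
    a = list(values)
    for _ in range(steps):
        total = sum(a)
        # Each node subtracts epsilon times the sum of the OTHER nodes,
        # and negative activations are clipped to zero.
        a = [max(0.0, x - epsilon * (total - x)) for x in a]
        if sum(1 for x in a if x > 0) <= 1:
            break  # only the winner remains active
    return a
```

For the competition to single out the maximum, epsilon should be small relative to 1/(number of nodes).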
Putting them together:

(Fig.3) A Simple Competitive network.
In a simple competitive network, a Maxnet connects the top nodes of the Hamming
net. Whenever an input is presented, the Hamming net measures the closeness of
the weight vector of each node to the input vector via the dot product, while the
Maxnet selects the node with the greatest dot product. In this way, the whole network
selects the node whose weight vector is closest to the input vector, i.e. the winner.
The network learns by moving the winning weight vector towards the input vector:

w_new = w_old + η (i - w_old)

where η is a small learning rate, while the other weight vectors remain unchanged.

(Fig.4) The winner learns by moving towards the input vector.
This process is repeated over all the samples many times. If the samples fall in
clusters, then each time a sample is presented, the winning weight vector moves
towards that sample's cluster. Eventually each of the weight vectors converges to the
centroid of one cluster. At this point, the training is complete.
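The whole training loop can be sketched as below. For simplicity this sketch picks the winner by Euclidean distance, which selects the same closest node as the unit-vector dot product described above; the names are ours:

```python
def train_competitive(samples, n_clusters, eta=0.1, epochs=50):
    """Simple competitive learning: the node whose weight vector is
    closest to the sample wins and moves toward it; losers stay put."""
    # Initialize each weight vector from one of the samples.
    weights = [list(samples[j]) for j in range(n_clusters)]
    for _ in range(epochs):
        for s in samples:
            # Competition: find the node with the closest weight vector.
            win = min(range(n_clusters),
                      key=lambda j: sum((w - x) ** 2
                                        for w, x in zip(weights[j], s)))
            # Only the winner moves toward the input vector.
            weights[win] = [w + eta * (x - w)
                            for w, x in zip(weights[win], s)]
    return weights
```

Run on two well-separated clumps of points, the two weight vectors end up near the two cluster centroids.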

(Fig.5) After training, the weight vectors become centroids of various clusters.
When a new sample is presented to a trained net, it is compared to the weight vectors,
which are the centroids of the clusters. By measuring the distance from the weight
vectors using the Hamming net, the sample is correctly grouped into the cluster to
which it is closest.
Feed-Forward networks:

(Fig.1) A feed-forward network.
Feed-forward networks have the following characteristics:
1. Perceptrons are arranged in layers, with the first layer taking in inputs and the last layer
producing outputs. The middle layers have no connection with the external world, and hence
are called hidden layers.
2. Each perceptron in one layer is connected to every perceptron in the next layer. Hence
information is constantly "fed forward" from one layer to the next, and this explains why
these networks are called feed-forward networks.
3. There is no connection among perceptrons in the same layer.
What's so cool about feed-forward networks?
Recall that a single perceptron can classify points into two regions that are linearly
separable. Now let us extend the discussion into the separation of points into two
regions that are not linearly separable. Consider the following network:

(Fig.2) A feed-forward network with one hidden layer.
The same (x, y) is fed into the network through the perceptrons in the input layer.
With four perceptrons that are independent of each other in the hidden layer, the point
is classified against 4 pairs of linearly separable regions, each pair separated by its
own unique line.

(Fig.3) 4 lines each dividing the plane into 2 linearly separable regions.
The top perceptron performs logical operations on the outputs of the hidden layer so
that the whole network classifies input points into 2 regions that might not be linearly
separable. For instance, using the AND operator on these four outputs, one gets the
intersection of the 4 regions that forms the center region.

(Fig.4) Intersection of 4 linearly separable regions forms the center region.
By varying the number of nodes in the hidden layer, the number of layers, and the
number of input and output nodes, one can classify points in arbitrary dimensions
into an arbitrary number of groups. Hence feed-forward networks are commonly used
for classification.
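As a sketch, here is such a network for a hypothetical center region, the unit square: four hidden perceptrons each test one side of the square, and the output perceptron ANDs their outputs (threshold 3.5 fires only when all four are 1):

```python
def step(s):
    """Step threshold function: fire iff the weighted sum is positive."""
    return 1 if s > 0 else 0

def in_center_region(x, y):
    # Hidden layer: four perceptrons, each testing one half-plane
    # of the form A x + B y > C.
    h = [
        step(x),      # right of the line x = 0
        step(1 - x),  # left of the line  x = 1
        step(y),      # above the line y = 0
        step(1 - y),  # below the line y = 1
    ]
    # Output perceptron: AND of the four half-planes.
    return step(sum(h) - 3.5)
```

The center region itself is not linearly separable from its complement, yet each hidden perceptron's half-plane is.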
Backpropagation -- learning in feed-forward networks:
Learning in feed-forward networks belongs to the realm of supervised learning, in
which pairs of input and output values are fed into the network for many cycles, so
that the network 'learns' the relationship between the input and output.
We provide the network with a number of training samples, each of which consists of
an input vector i and its desired output o. For instance, in the classification problem,
suppose we have points (1, 2) and (1, 3) belonging to group 0, points (2, 3) and (3, 4)
belonging to group 1, and points (5, 6) and (6, 7) belonging to group 2; then for a
feed-forward network with 2 input nodes and 2 output nodes, the training set would be:
{ i = (1, 2), o = (0, 0)
  i = (1, 3), o = (0, 0)
  i = (2, 3), o = (1, 0)
  i = (3, 4), o = (1, 0)
  i = (5, 6), o = (0, 1)
  i = (6, 7), o = (0, 1) }
The basic rule for choosing the number of output nodes depends on the number of
different regions. It is advisable to use a unary notation to represent the different
regions, i.e. for each output at most one node has value 1, with the all-zero vector
representing one of the regions. Hence the number of output nodes = number of
different regions - 1.
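This encoding rule can be sketched as a small function (the name is ours) that reproduces the output vectors of the training set above, where group 0 maps to the all-zero vector and 3 regions need only 2 output nodes:

```python
def encode_group(group, n_regions):
    """Unary output encoding: n_regions - 1 output nodes, with at most
    one node set to 1, and group 0 encoded as all zeros."""
    return tuple(1 if group == j + 1 else 0 for j in range(n_regions - 1))
```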
In backpropagation learning, every time an input vector of a training sample is
presented, the output vector o is compared to the desired value d.
The comparison is done by calculating the squared difference of the two:

Err = |o - d|^2

The value of Err tells us how far away we are from the desired value for a particular
input. The goal of backpropagation is to minimize the sum of Err for all the training
samples, so that the network behaves in the most "desirable" way:

Minimize  Σ Err  (summed over all training samples)
We can express Err in terms of the input vector (i), the weight vectors (w), and the
threshold function of the perceptrons. Using a continuous function (instead of the step
function) as the threshold function, we can express the gradient of Err with respect to
w in terms of w and i.
Given the fact that decreasing the value of w in the direction of the gradient leads to
the most rapid decrease in Err, we update the weight vectors every time a sample is
presented using the following formula:

w_new = w_old - η ∇w Err

where η is the learning rate (a small number ~ 0.1).
Using this algorithm, the weight vectors are modified so that the value of Err for a
particular input sample decreases a little bit every time the sample is presented. When
all the samples are presented in turn for many cycles, the sum of Err gradually
decreases to a minimum value, which is our goal as mentioned above.
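The whole procedure can be sketched for a small 2-input, 1-output network with one hidden layer. This is a standard formulation with a sigmoid threshold function; the variable names are ours:

```python
import math
import random

def sigmoid(s):
    # Continuous threshold function in place of the step function,
    # so the gradient of Err is defined everywhere.
    return 1.0 / (1.0 + math.exp(-s))

def train_backprop(samples, n_hidden=4, eta=0.5, epochs=2000, seed=1):
    """Train by gradient descent on Err = (o - d)^2, sample by sample."""
    rng = random.Random(seed)
    # Hidden weights (two inputs + bias per node) and output weights (+ bias).
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for (x1, x2), d in samples:
            # Forward pass.
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
            o = sigmoid(sum(w_o[j] * h[j] for j in range(n_hidden)) + w_o[-1])
            # Backward pass: propagate the error gradient to every weight.
            delta_o = (o - d) * o * (1 - o)
            for j in range(n_hidden):
                delta_h = delta_o * w_o[j] * h[j] * (1 - h[j])
                w_o[j] -= eta * delta_o * h[j]
                w_h[j][0] -= eta * delta_h * x1
                w_h[j][1] -= eta * delta_h * x2
                w_h[j][2] -= eta * delta_h
            w_o[-1] -= eta * delta_o
    def predict(x1, x2):
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        return sigmoid(sum(w_o[j] * h[j] for j in range(n_hidden)) + w_o[-1])
    return predict
```

For example, training on the OR function's four input/output pairs drives the network's output toward the desired value at every point.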
Some specific details of neural networks:
Although the possibilities of solving problems using a single perceptron are limited, by arranging
many perceptrons in various configurations and applying training mechanisms, one can actually
perform tasks that are hard to implement using conventional von Neumann machines.
We are going to describe four different uses of neural networks that are of great significance:
1. Classification. In a mathematical sense, this involves dividing an n-dimensional space into
various regions; given a point in the space, one should tell to which region it belongs. This
idea is used in many real-world applications, for instance, in various pattern recognition
programs. Each pattern is transformed into a multi-dimensional point and classified into a
certain group, each of which represents a known pattern.
Type of network used:

Feed-forward networks

2. Prediction. A neural network can be trained to produce outputs that are expected given a
particular input. If we have a network that fits well in modeling a known sequence of values,
one can use it to predict future results. An obvious example is stock market prediction.
Type of network used:

Feed-forward networks

3. Clustering. Sometimes we have to analyze data that are so complicated there is no obvious
way to classify them into different categories. Neural networks can be used to identify
special features of these data and classify them into different categories without prior
knowledge of the data. This technique is useful in data mining for both commercial and
scientific uses.
Type of network used:

Simple Competitive
Networks

Adaptive Resonance Theory
(ART) networks

Kohonen Self-Organizing
Maps (SOM)

4. Association. A neural network can be trained to "remember" a number of patterns, so that
when a distorted version of a particular pattern is presented, the network associates it with
the closest one in its memory and returns the original version of that particular pattern. This
is useful for restoring noisy data.
Type of network used:

Hopfield networks

The above is just a general picture of what neural networks can do in real life. There are many
creative uses of neural networks that arise from these general applications. One example is
image compression using association networks; another is solving the Traveling Salesman
Problem using clustering networks.
Variations on Simple Competitive networks:

Adaptive Resonance Theory (ART)
The number of nodes at the output layer is variable. Hence the number of clusters formed is
flexible and depends on the input data.

Kohonen Self-Organizing Maps (SOM)
While in simple competitive networks only the winner learns, in SOMs learning occurs at all
nodes in a neighborhood of the winner.
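One SOM learning step on a one-dimensional grid of nodes can be sketched as below (a minimal formulation; the names are ours):

```python
def som_update(weights, sample, winner, eta=0.1, radius=1):
    """SOM step: unlike the simple competitive rule, every node within
    `radius` grid positions of the winner moves toward the sample."""
    return [
        [w + eta * (x - w) for w, x in zip(node, sample)]
        if abs(j - winner) <= radius else node
        for j, node in enumerate(weights)
    ]
```

With radius 0 this reduces to the simple competitive rule where only the winner learns.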

Applications of neural networks
Character Recognition - The idea of character recognition has become very important as
handheld devices like the Palm Pilot are becoming increasingly popular. Neural networks can be
used to recognize handwritten characters.
Image Compression - Neural networks can receive and process vast amounts of
information at once, making them useful in image compression. With the Internet explosion and
more sites using more images, using neural networks for image compression is worth a look.
Stock Market Prediction - The day-to-day business of the stock market is extremely
complicated. Many factors weigh in whether a given stock will go up or down on any given day.
Since neural networks can examine a lot of information quickly and sort it all out, they can be
used to predict stock prices.
Traveling Salesman Problem - Interestingly enough, neural networks can solve the
traveling salesman problem, but only to a certain degree of approximation.
Medicine, Electronic Nose, Security, and Loan Applications - These are some
applications that are in their proof-of-concept stage, with the exception of a neural network that
will decide whether or not to grant a loan, something that has already been used more
successfully than many humans.
Miscellaneous Applications - These are some very interesting (albeit at times a little
absurd) applications of neural networks.
Application of Feed-forward networks - character
recognition:

(Fig.1) A feed-forward network for character recognition.
The idea of using feedforward networks to recognize handwritten characters is rather
straightforward. As in most supervised training, the bitmap pattern of the handwritten character
is treated as an input, with the correct letter or digit as the desired output. Normally such
programs require the user to train the network by providing the program with their handwritten
patterns.
One may visit Bob Mitchell's page for an interesting java applet demo.
Neural Networks and Image Compression
Because neural networks can accept a vast array of input at once, and process it quickly, they are
useful in image compression.
Bottleneck-type Neural Net
Architecture for Image Compression

Image courtesy
of:http://neuron.eng.wayne.edu/bpImageCompression9PLUS/bp9PLUS.html

Here is a neural net architecture suitable for
solving the image compression problem.
This type of structure is referred to as a
bottleneck type network, and consists of an
input layer and an output layer of equal
sizes, with an intermediate layer of smaller
size in-between. The ratio of the size of the
input layer to the size of the intermediate
layer is, of course, the compression ratio.

This is the same image compression
scheme, but implemented over a network.
The transmitter encodes and then
transmits the output of the hidden layer,
and the receiver receives and decodes the
16 hidden outputs to generate 64 outputs.
Bottleneck architecture for image
compression over a network or over time

Image courtesy
of:http://neuron.eng.wayne.edu/bpImageCompression9PLUS/bp9PLUS.html
Pixels, which consist of 8 bits, are fed into each input node, and the hope is that the same pixels
will be outputted after the compression has occurred. That is, we want the network to perform
the identity function.
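A sketch of this identity-function training, assuming the 64-16-64 sizes above, tanh hidden units, and plain gradient descent on squared error (the weights, learning rate, and single-patch training loop are all made-up illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 64 inputs (a flattened 8x8 patch),
# a 16-unit bottleneck, and 64 outputs.
W1 = rng.normal(0, 0.1, (16, 64))   # encoder weights
W2 = rng.normal(0, 0.1, (64, 16))   # decoder weights

def forward(x):
    h = np.tanh(W1 @ x)             # hidden code: each value lies in (-1, 1)
    return h, W2 @ h                # reconstruction of the 64 inputs

patch = rng.random(64)              # stands in for one normalized patch

# Train toward the identity function: the target output is the input itself.
lr = 0.01
for _ in range(500):
    h, out = forward(patch)
    err = out - patch               # reconstruction error
    dW2 = np.outer(err, h)
    dh = (W2.T @ err) * (1 - h**2)  # backprop through the tanh hidden layer
    dW1 = np.outer(dh, patch)
    W2 -= lr * dW2
    W1 -= lr * dW1

h, out = forward(patch)
print(np.mean((out - patch) ** 2))  # reconstruction error shrinks with training
```

Note that the 16 hidden activations are the "compressed" representation; the decoder half reconstructs the 64 pixels from them.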
Actually, even though the bottleneck takes us from 64 nodes down to 16 nodes, no real
compression has occurred. The 64 original inputs are 8-bit pixel values. The outputs of the
hidden layer are, however, decimal values between -1 and 1. These decimal values can require a
possibly infinite number of bits.
How to get around this? The answer is...
Quantization for Image Compression
We do not need to be able to represent every
possible decimal value, but instead can clump the
possible values into, say, 3-bit units. 8 possible binary
codes can be formed: 000, 001, 010, 011, 100, 101, 110, 111, and each of these codes represents
a range of values for a hidden unit output. For example, when a hidden value is between -1.0 and
-.75, the code 000 is transmitted.
Hence, the image is compressed from 64 pixels * 8 bits each = 512 bits to 16 hidden values * 3
bits each = 48 bits: the compressed image is about 1/10th the size of the original!
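The 3-bit quantization scheme described above can be sketched as follows (the bins are uniform over [-1, 1], matching the example where values between -1.0 and -0.75 map to code 000):

```python
import numpy as np

# 3-bit uniform quantization of hidden values in [-1, 1]: 8 bins of width 0.25.
def quantize(h, bits=3):
    levels = 2 ** bits
    # Map [-1, 1] onto bin indices 0 .. levels-1.
    idx = np.floor((h + 1.0) / 2.0 * levels).astype(int)
    return np.clip(idx, 0, levels - 1)

def dequantize(idx, bits=3):
    levels = 2 ** bits
    # Reconstruct each value as the centre of its bin.
    return (idx + 0.5) / levels * 2.0 - 1.0

hidden = np.array([-1.0, -0.8, 0.0, 0.3, 0.99])
codes = quantize(hidden)
print(codes)                          # -1.0 and -0.8 both fall in bin 000
print(dequantize(codes))              # lossy: only bin centres come back

# Compression ratio: 64 pixels * 8 bits vs 16 hidden values * 3 bits
print(64 * 8, "->", 16 * 3, "bits")   # 512 -> 48
```

Dequantizing returns only the centre of each bin, which is exactly the information loss the next paragraph describes.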
However, this encoding scheme is not lossless; the original image cannot be retrieved because
information is lost in the process of quantizing. Nonetheless, it can produce quite good results, as
shown (though, of course, this image has been turned into a jpeg as well, so the actual results of
the original compression cannot be seen; however, the contrast between the two images, or the
lack of contrast, as is the goal of compression, should be sufficient for this brief discussion).
ORIGINAL RECONSTRUCTED IMAGE

How does a network learn to do this?
** See neural network learning and image compression firsthand
at: http://neuron.eng.wayne.edu/bpImageCompression9PLUS/bp9PLUS.html**
The goal of these data compression networks is to re-create the input itself. Hence, the input is
used as its own output for training purposes. The input (for the applet) is produced from the
256x256 training image by extracting small 8x8 chunks of the image chosen at a uniformly
random location in the image. This data is presented over and over, and the weights adjusted,
until the network reproduces the image relatively faithfully. You can see this randomization of
the 8x8 chunks in the top row of the applet. The randomization helps ensure generalization of the
network so that it will work on other images as well.
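The random 8x8 patch extraction used for training can be sketched like this (a random array stands in for the 256x256 training image):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))        # stands in for the training image

def random_patch(img, size=8):
    # Pick a uniformly random top-left corner so the patch fits in the image.
    r = rng.integers(0, img.shape[0] - size + 1)
    c = rng.integers(0, img.shape[1] - size + 1)
    return img[r:r + size, c:c + size].reshape(-1)   # flatten to 64 values

# A batch of randomly located chunks, presented over and over during training.
batch = np.stack([random_patch(image) for _ in range(32)])
print(batch.shape)                    # (32, 64)
```

Sampling patch locations uniformly, rather than sweeping in order, is what gives the generalization the text mentions: the network never memorizes a fixed sequence of chunks.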
Once training is complete, image re-construction is demonstrated in the recall phase. In this case,
we still present the neural net with 8x8 chunks of the image, but now instead of randomly
selecting the location of each chunk, we select the chunks in sequence from left to right and from
top to bottom. For each such 8x8 chunk, the output of the network can be computed and displayed
on the screen to visually observe the performance of neural net image compression.
We can test how well the network has generalized the data by testing image compression on
other pictures, such as the one on the bottom. And we can continue to train the network if the
output is not of high enough quality.
Neural networks and financial prediction
Neural networks have been touted as all-powerful tools in stock-market prediction. Companies
such as MJ Futures claim an amazing 199.2% return over a 2-year period using their neural
network prediction methods. They also claim great ease of use; as technical editor John Sweeney
said in a 1995 issue of "Technical Analysis of Stocks and Commodities," "you can skip
developing complex rules (and redeveloping them as their effectiveness fades) . . . just define the
price series and indicators you want to use, and the neural network does the rest."
These may be exaggerated claims, and, indeed, neural networks may be easy to use once the
network is set up, but the setup and training of the network requires skill, experience, and
patience. It's not all hype, though; neural networks have shown success at prediction of market
trends.
The idea of stock market prediction is not new, of course. Business people often attempt to
anticipate the market by interpreting external parameters, such as economic indicators, public
opinion, and current political climate. The question, though, is whether neural networks can
discover trends in data that humans might not notice, and successfully use these trends in their
predictions.
Good results have been achieved by Dean Barr and Walter
Loick at LBS Capital Management using a relatively simple
neural network with just 6 financial indicators as inputs. These
inputs include the ADX, which indicates the average
directional movement over the previous 18 days, the current
value of the S&P 500, and the net change in the S&P
500 value from five days prior (see David Skapura's book
"Building Neural Networks," pp. 129-154, for more detailed
information).
This is a simple back-propagation network of three layers, and it is trained and tested on a high
volume of historical market data. The challenge here is not in the network architecture itself, but
instead in the choice of variables and the information used for training. I could not find the
accuracy rates for this network, but my source claimed it achieved "remarkable success" (this
source was a textbook, not a NN-prediction-selling website!).
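As an illustration only (this is not the LBS Capital Management network; the layer sizes, weights, and indicator values below are invented), a three-layer feedforward pass over 6 indicator inputs looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 6 indicator inputs, 8 hidden units, 1 output
# (e.g. a score for the predicted direction of the S&P 500).
W1, b1 = rng.normal(0, 0.1, (8, 6)), np.zeros(8)
W2, b2 = rng.normal(0, 0.1, (1, 8)), np.zeros(1)

def predict(indicators):
    h = np.tanh(W1 @ indicators + b1)
    return np.tanh(W2 @ h + b2)       # in (-1, 1): negative = down, positive = up

# Six made-up, pre-scaled indicator values (ADX, current S&P 500 level,
# 5-day net change, and so on).
x = np.array([0.3, 0.7, -0.1, 0.2, 0.5, -0.4])
print(predict(x))
```

As the text notes, the architecture itself is simple; the real work lies in choosing and scaling the input variables and in training on high volumes of historical data.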
Even better results have been achieved with a back-propagated neural network with 2 hidden
layers and many more than 6 variables. I have not been able to find more details on these
network architectures, however; the companies that work with them seem to want to keep their
details secret.
Additional Neural Network Applications in the financial world:
o Currency prediction
o Futures prediction
o Bond ratings
o Business failure prediction
o Debt risk assessment
o Credit approval
o Bank theft
o Bank failure

Want to find out more about NN financial predictions? Check out these
companies:
o M.J. Futures: http://www6.bcity.com/mjfutures/
o TradeTrek: http://www.tradetrek.com/education/ai/ai_stock_trading03.asp
o Free Financial Market Signals: http://www.financialtimingmodels.com/
o WeTradeFutures.com: www.wetradefutures.com

The travelling salesman's problem:
Definition: Given a number of cities on a plane, find the shortest path through which one can
visit all of the cities.
In contrast to using recursion to try all the different possibilities, we are going to approximate the
solution using a Kohonen SOM, which organizes itself like an elastic rubber band. The basic idea
of this solution is to start out with an elastic net in a random orientation:

(Fig.1) A random ring.
The algorithm is to choose a random city each time, and pull the point on the net that is closest to
the city towards the city:

(Fig.2) A point pulled towards Vashon Island brings the net closer to the city.
Since the net is elastic, each pull changes the shape of the net. After many pulls for different
cities, the net eventually converges into a ring around the cities. Given the elastic property of the
ring, the length of the ring tends to be minimized. This is how the TSP can be approximated.
The details:
As in simple competitive networks, the weight vectors in SOMs are randomly assigned at the
beginning. Another similarity is that the SOM also identifies the winner (the
perceptron whose weight vector is closest to the input vector) every time an input vector is
presented. The only difference is that while in simple competitive networks only the winner
learns, in SOMs all nodes learn, but the rate of learning varies inversely with a node's physical
distance from the winner.
The Kohonen SOM used in solving the TSP has the following structure:

(Fig.3) A Kohonen net with a ring-shaped top layer.
Recall the elastic rubber-band model mentioned above: such a ring-shaped map simulates a
rubber band if we consider the weight vectors as points on a plane. We can join these points
together according to the position of their respective perceptrons in the ring of the top layer of the
network.
When the coordinates (x, y) of a city are presented as the input vector, the network
identifies the weight vector closest to the city and moves it and its neighbors closer to
the city. In this manner, the weight vectors behave as points on a rubber band and each
"learning" is analogous to pulling the closest point of the band towards a city. The rule that the
amount of learning varies inversely with the physical distance between the node and the winner
is what leads to the elastic property of the rubber band.
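A compact sketch of this elastic-ring SOM for the TSP (the city coordinates, ring size, learning-rate schedule, and neighborhood function are all illustrative choices, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(2)
cities = rng.random((10, 2))                  # 10 cities on the unit square

n_nodes = 30                                  # ring of weight vectors
angles = np.linspace(0, 2 * np.pi, n_nodes, endpoint=False)
ring = 0.5 + 0.2 * np.column_stack([np.cos(angles), np.sin(angles)])

for step in range(2000):
    lr = 0.8 * (1 - step / 2000)              # learning rate decays over time
    radius = max(1, int(n_nodes / 5 * (1 - step / 2000)))
    city = cities[rng.integers(len(cities))]  # present a random city
    winner = np.argmin(np.linalg.norm(ring - city, axis=1))
    for offset in range(-radius, radius + 1):
        j = (winner + offset) % n_nodes       # neighbors wrap around the ring
        # Learning falls off with ring distance from the winner.
        influence = np.exp(-offset**2 / (2 * radius**2))
        ring[j] += lr * influence * (city - ring[j])

# Read off a tour: visit cities in the order their nearest ring nodes appear.
order = np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities])
print(order)
```

Each iteration is one "pull" on the band: the winner moves most, its ring neighbors move less, and as the radius shrinks the ring tightens into an approximate tour.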
For a more detailed explanation of this problem, please visit our resource for this problem.
Applications of neural networks
Medicine
One of the areas that has gained attention is cardiopulmonary diagnostics. Neural networks
work in this and other areas of medical diagnosis by comparing many different models. A
patient may have regular checkups in a particular area, increasing the possibility of detecting a
disease or dysfunction.
The data compared against the different models may include heart rate, blood pressure, breathing rate, and so on. The
models may include variations for age, sex, and level of physical activity. Each individual's
physiological data is compared to previous physiological data and/or data of the various generic
models. The deviations from the norm are compared to the known causes of deviations for each
medical condition. The neural network can learn by studying the different conditions and
models, merging them to form a complete conceptual picture, and then diagnose a patient's
condition based upon the models.
Electronic Noses
The idea of a chemical nose may seem a bit absurd, but it
has several real-world applications. The electronic nose is
composed of a chemical sensing system (such as a
spectrometer) and an artificial neural network, which
recognizes certain patterns of chemicals. An odor is
passed over the chemical sensor array, these chemicals
are then translated into a format that the computer can
understand, and the artificial neural network identifies the
chemical.
A list at the Pacific Northwest Laboratory has several
different applications in the environment, medical, and
food industries.
Environment: identification of toxic wastes, analysis of fuel mixtures (7-11 example), detection
of oil leaks, identification of household odors, monitoring air quality, monitoring factory
emission, and testing ground water for odors.
Medical: The idea of using these in the medical field is to examine odors from the body to
identify and diagnose problems. Odors in the breath, infected wounds, and body fluids all can
indicate problems. Artificial neural networks have even been used to detect tuberculosis.
Food: The food industry is perhaps the biggest practical market for electronic noses, assisting or
entirely replacing humans. Applications include inspection of food, grading the quality of food,
fish inspection, fermentation control, checking mayonnaise for rancidity, automated flavor
control, monitoring cheese ripening, verifying whether orange juice is natural, beverage
container inspection, and grading whiskey.
Security
One program that has already been started is CATCH, an acronym for
Computer Aided Tracking and Characterization of Homicides. It learns about an existing crime,
the location of the crime, and the particular characteristics of the offense. The program is
subdivided into different tools, each of which place an emphasis on a certain characteristic or
group of characteristics. This allows the user to remove certain characteristics which humans
determine are unrelated.
Loans and credit cards
Loan granting is one area in which neural networks can aid humans, as it is an area not based on
predetermined and preweighted criteria; the answers are instead nebulous. Banks want to make
as much money as they can, and one way to do this is to lower the failure rate by using neural
networks to decide whether the bank should approve the loan. Neural networks are particularly
useful in this area since no process will guarantee 100% accuracy. Even an 85-90% accuracy
would be an improvement over the methods humans use.

An actual electronic "nose"
Image courtesy Pacific Northwest Laboratory
In fact, in some banks, the failure rate of loans approved using neural networks is lower than that
of some of their best traditional methods. Some credit card companies are now beginning to use
neural networks in deciding whether to grant an application.
The process works by analyzing past failures and making current decisions based upon past
experience. Nonetheless, this creates its own problems. For example, the bank or credit company
must justify their decision to the applicant. The reason "my neural network computer
recommended against it" simply isn't enough for people to accept. The process of explaining how
the network learned and on what characteristics the neural network made its decision is difficult.
As we alluded to earlier in the history of neural networks, self-modifying code is very difficult to
debug and thus difficult to trace. Recording the steps it went through isn't enough, as it might be
using conventional computing, because even the individual steps the neural network went
through have to be analyzed by human beings, or possibly the network itself, to determine that a
particular piece of data was crucial in the decision-making process.
Other Applications of Neural Networks
Here are a few of the more quirky applications of Neural Networks:

image courtesy Notre
Dame
Bill's Notre Dame Football Predictor
1. Train the network on historical data for offensive plays
given particular game situations
2. Use network to predict what offensive play will be chosen
at any point in the game
http://robby.caltech.edu/~goodwine/football.html


Getting rat thoughts to move robotic
parts
1. Train a rat to press a lever, which activates a robotic
arm. Robotic arm delivers reward to rat.
2. Attach a 16-probe array to the rat's brain that can
record the activity of 30 neurons at once.
3. Train a neural network program to recognize brain-
activity patterns during a lever press.
4. Neural network can predict movement from the rat's
brain activity alone, so when the rat's brain activity
indicates that it is about to press the lever, robotic arm
moves and rewards the rat - the rat does not need to
press the lever, but merely needs to "think" about
doing so (whatever rat "thinking" may be)
http://www.sciam.com/1999/1199issue/1199techbus2.
html

ALVINN, the self-driving car

(this car is not ALVINN, it's from PhotoEssentials)
http://www.cs.cmu.edu/afs/cs.cmu.edu/project/alv/member/www/projects/ALVINN.html
1. Train single hidden layer back-propagation network on images of the road under a variety of
conditions and the appropriate steering modification for each condition.
2. Allow car to drive itself: a video image from the onboard camera is injected into the input
layer. Activation is passed forward through the network and a steering command is read off the
output layer. The most active output unit determines the direction in which to steer.
3. Allow car to drive itself at speeds up to 70 mph on Pittsburgh freeways
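The recall step described above, reading the steering command off the most active output unit, can be sketched as follows (the layer sizes, weights, and steering categories here are hypothetical stand-ins, not ALVINN's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sizes: a 30x32 camera image, 5 hidden units, and 5 steering
# outputs ranging from hard left to hard right.
W1 = rng.normal(0, 0.1, (5, 30 * 32))
W2 = rng.normal(0, 0.1, (5, 5))
directions = ["hard left", "left", "straight", "right", "hard right"]

def steer(image):
    h = np.tanh(W1 @ image.reshape(-1))      # activation passed forward
    out = W2 @ h                             # activation at the output layer
    return directions[int(np.argmax(out))]   # the most active unit wins

frame = rng.random((30, 32))                 # stands in for a video frame
print(steer(frame))
```

The key design choice is step 2 of the text: steering is not a single continuous output but a bank of output units, and the argmax picks the direction.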

An Analysis of Sheep Rumination and
Mastication
Anthony Zaknich and Sue K Baker (1998), "A real-time system for the
characterisation of sheep feeding phases from acoustic signals of jaw
sounds," Australian Journal of Intelligent Information Processing
Systems (AJIIPS), Vol. 5, No. 2, Winter 1998.
1. Attach radio microphones to the top of sheep heads to
transmit chewing sounds
2. Record chewing sounds and times of chewing
3. Use a neural network classifier, using your time and
frequency data as input, to predict future rumination and
mastication time periods
Why??? We haven't a clue! The online abstract didn't tell us.


To Neural Networks and Beyond!
Neural Networks and Consciousness
So, neural networks are very good at a wide variety of problems, most of which
involve finding trends in large quantities of data. They are better suited than
traditional computer architecture to problems that humans are naturally good at
and which computers are traditionally bad at: image recognition, making
generalizations, that sort of thing. And researchers are continually constructing
networks that are better at these problems.
But will neural networks ever fully simulate the human brain? Will they be as complex and as
functional? Will a machine ever be conscious of its own existence?
Simulating human consciousness and emotion is still the realm of science fiction. It may happen
one day, or it may not; this is an issue we won't delve into here, because, of course, there are
huge philosophical arguments about what consciousness is, and if it can possibly be simulated by
a machine... do we have souls or some special life-force that is impossible to simulate in a
machine? If not, how do we make the jump from, as one researcher puts it, "an electrical reaction
in the brain to suddenly seeing the world around one with all its distances, its colors and
chiaroscuro?" Well, like I said, we won't delve into it here; the issue is far too deep, and, in the
end, perhaps irresolvable... (if you want to delve, check
out http://www.culture.com.au/brain_proj/neur_net.htm and http://www.iasc-
bg.org.yu/Papers/Work-97/work-97.html)
Perhaps NNs can, though, give us some insight into the "easy problems" of consciousness: how
does the brain process environmental stimulation? How does it integrate information? But, the
real question is, why and how is all of this processing, in humans, accompanied by an
experienced inner life, and can a machine achieve such a self-awareness?
Of course, the whole future of neural networks does not reside in attempts to simulate
consciousness. Indeed, that is of relatively small concern at the moment; more pressing are issues
of how to improve the systems we have.
Recent advances and future applications of NNs include:
Integration of fuzzy logic into neural networks
Fuzzy logic is a type of logic that recognizes more than simple true and false values,
hence better simulating the real world. For example, the statement "today is sunny" might
be 100% true if there are no clouds, 80% true if there are a few clouds, 50% true if it's
hazy, and 0% true if it rains all day. Hence, it takes into account concepts like "usually,"
"somewhat," and "sometimes."
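A minimal sketch of such graded truth values, using the "today is sunny" example (the linear membership function over cloud cover is an assumption for illustration):

```python
# Fuzzy truth value for "today is sunny", given cloud cover as a fraction
# between 0.0 (clear sky) and 1.0 (fully overcast).
def sunny_truth(cloud_cover):
    # Truth degree falls linearly with cloud cover, clamped to [0, 1].
    return max(0.0, min(1.0, 1.0 - cloud_cover))

print(sunny_truth(0.0))   # no clouds: 1.0 (100% true)
print(sunny_truth(0.2))   # a few clouds: 0.8
print(sunny_truth(1.0))   # overcast: 0.0
```

Instead of a hard true/false, the statement carries a degree of truth between 0 and 1, which is what lets fuzzy systems encode notions like "usually" and "somewhat."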
Fuzzy logic and neural networks have been integrated for uses as diverse as automotive
engineering, applicant screening for jobs, the control of a crane, and the monitoring of
glaucoma.
See www.fuzzytech.com for more information

Pulsed neural networks
"Most practical applications of artificial neural networks are based on a computational
model involving the propagation of continuous variables from one processing unit to the
next. In recent years, data from neurobiological experiments have made it increasingly
clear that biological neural networks, which communicate through pulses, use the
timing of the pulses to transmit information and perform computation. This
realization has stimulated significant research on pulsed neural networks, including
theoretical analyses and model development, neurobiological modeling, and hardware
implementation." (from http://www.tu-graz.ac.at/igi/maass/PNN.html )

Hardware specialized for neural networks
Some networks have been hardcoded into chips or analog devices; this technology will
become more useful as the networks we use become more complex.
The primary benefit of directly encoding neural networks onto chips or specialized
analog devices is SPEED!
NN hardware currently runs in a few niche areas, such as those areas where very high
performance is required (e.g. high energy physics) and in embedded applications of
simple, hardwired networks (e.g. voice recognition).
Many NNs today use fewer than 100 neurons and only need occasional training. In these
situations, software simulation is usually found sufficient.
When NN algorithms develop to the point where useful things can be done with thousands of
neurons and tens of thousands of synapses, high-performance NN hardware will become essential
for practical operation.
(from http://www.particle.kth.se/~lindsey/HardwareNNWCourse/ )

Improvement of existing technologies
All current NN technologies will most likely be vastly improved upon in the future.
Everything from handwriting and speech recognition to stock market prediction will
become more sophisticated as researchers develop better training methods and network
architectures.
NNs might, in the future, allow:
o robots that can see, feel, and predict the world around them
o improved stock prediction
o common usage of self-driving cars
o composition of music
o handwritten documents to be automatically transformed into formatted word
processing documents
o trends found in the human genome to aid in the understanding of the data
compiled by the Human Genome Project
o self-diagnosis of medical problems using neural networks
o and much more!
A word of caution (and a funny story!):
Although neural networks do seem to be able to solve many problems, we must put our
exuberance in check sometimes; they are not magic! Overconfidence in neural networks
can result in costly mistakes: see http://vv.carleton.ca/~neil/neural/tank.html for a rather
funny story about the government and neural networks.

Want to find out more? Try these sites:
Neural Networks and the Computational Brain, a somewhat technical and somewhat
philosophical treatise on the potential of modeling
consciousness: http://www.culture.com.au/brain_proj/neur_net.htm
ThoughtTreasure, a database of 25,000 concepts, 55,000 English and French words and
phrases, 50,000 assertions, and 100 scripts, which is attempting to bring natural language
and commonsense capabilities to computers: http://www.signiform.com/tt/htm/tt.htm.
The conclusions page is especially clear and
useful: http://www.signiform.com/tt/book/Concl.html
Hierarchical Neural Networks and Brainwaves: Towards a Theory of
Consciousness: This paper gives "a comparative biocybernetical analysis of the
possibilities in modeling consciousness and other psychological functions (perception,
memorizing, learning, emotions, language, creativity, thinking, and transpersonal
interactions!), by using biocybernetical models of hierarchical neural networks and
brainwaves." http://www.iasc-bg.org.yu/Papers/Work-97/work-97.html
Fuzzy Logic Information: FuzzyTech: http://www.fuzzytech.com/, a vast resource for
information, simulations, and applications of fuzzy logic.
Neural Networking Hardware: Neural Networks in Hardware: Architectures, Products
and Applications http://www.particle.kth.se/~lindsey/HardwareNNWCourse/
Neural Network Resources
Note: One of the easiest ways to find good resources on neural networks is to go
to www.about.com or www.infind.com and enter search query: "neural networks."
Another GREAT list of Neural Network Resources ("If it's on the web, it's listed here")
is http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm

Introductions to Neural Networks
Applications of Neural Networks (and some fun applets!)
Neural Networks and Simulated Consciousness
Books

Introductory Material
FAQs: Frequently Asked Questions Newsgroup: This site contains detailed but
comprehensible answers to common questions about NNs. Particularly useful for me was
the question "What can you do with an ANN?" It also contains an excellent set of links to
online NN resources. Highly recommended for content, if not for beauty of site design.
Fabulous NN introduction: Artificial Neural Networks Technology (from the
Department of Defense Information Analysis Center): An excellent introduction to the
basic principles of neural networks, this article has many clear graphics and non-
mathematical, but thorough, explanations. It investigates the basic architecture of a neural
network, including the various configurations and learning mechanisms, and also
discusses the history and future applications of the neural network.
A textbook and network simulator in one: Brain Wave: This introductory, online
textbook also contains an interactive applet for making one's own simple networks. This
text contains many excellent, clear graphics, and non-technical explanations of the
various neural net architectures.
Fun, simple applets: Web Applets for Interactive Tutorials on Artificial Neural Learning
Animated neuron and NN Introduction: An Introduction To Neural Networks: This
page contains a great animated gif of a biological neuron, and also includes a general
introduction to neural networks.
General Introduction: Generation 5's Introduction to Neural Networks: This article
provides an introduction to neural networks for a non-technical audience.
Introduction to Kohonen networks: Kohonen Networks: This article deals with one
specific type of neural network, the Kohonen network, which is used to execute
unsupervised learning. In unsupervised learning, there is no comparison of the network's
answer to specific desired output, and the simulated neurons have the property of self-
organization.
Introduction and comparison with Von Neumann: An Introduction to Neural
Networks: Dr. Leslie Smith presents a good comparison between the advantages and
disadvantages of the von Neumann architecture and neural networks, and also provides a
quick survey of different kinds of networks. These include the BP network, RGF
network, simple perceptron network, and Kohonen network. The article also examines
where neural networks are applicable and where they may possibly be applied in the
future.
Somewhat Technical NN Description: Statsoft's Neural Networks: This article provides
a lengthy and somewhat technical description of neural networks. It looks at RBF
networks, probabilistic neural networks, generalized regression neural networks, linear
networks, and Kohonen networks. It looks at the artificial model of neural networks and
how the human brain is modeled with neural networks. It also examines feedforward
structures and the structures most useful in solving problems.
Applets for XOR and Kohonen: The HTML Neural Net Consulter.
Intro from ZSolutions, a NN-provider: Introduction to Neural Networks: Using a
simple example of a neural network developed to predict the number of runs scored by a
baseball team, this article investigates the architecture and potential uses of neural
networks. It also serves as promotional material: Z Solutions provides neural networks
for corporations.
Intro 2 from ZSolutions, a NN-provider: Want to Try Neural Nets? A somewhat
propaganda-esque document for a neural networking company, this article nonetheless
provides a good introductory survey of neural networks and the potential for
implementing them in the real world.
Fuzzy Logic Information: FuzzyTech: a vast resource for information, simulations, and
applications of fuzzy logic.


Applications of Neural Networks (and some fun applets!)
Information on over 50 NN applications: BrainMaker Neural Network Software: Great
list of examples of specific NN applications regarding stocks, business, medicine, and
manufacturing.
Applet for 8-queens problem: 8-queens problem and neural networks
Applet for Travelling Salesman: Elastic Net Method for Travelling Salesman
Problem: The elastic net is a kind of neural network used for optimization
problems; this applet demonstrates its use applied to the Travelling Salesman Problem.
Handwriting Recognition Applet: Artificial Neural Network Handwriting
Recognizer: This applet demonstrates neural networks applied to handwriting recognition.
Users train a neural network on 10 digits of their handwriting and then test its recognition
accuracy.
A few applets: Takefuji Labs' neural demos in Java.
NNs as Information Analyzers: Technology Brief: Artificial Neural Networks as
Information Analysis Tools: The article discusses how neural networks, as information
analysis tools, can be used in the medical industry, the environment, security and law
enforcement, and mechanical system diagnosis.

NNs for Medical Diagnosis: Technology Brief: Computer Assisted Medical
Diagnosis: This article looks at the ways in which a neural network can assist doctors in
medical diagnoses. It investigates uses in the cardiovascular system, diagnosing coronary
artery disease, and chemical analysis.
Applet for Travelling Salesman: Travelling Salesman Problem: This applet demonstrates
a neural network applied to a 2-D travelling salesman problem.
Applet for Image Compression: Image Compression using Backprop.


Neural Networks and Simulated Consciousness
Technical / Philosophical Paper: Neural Networks and the Computational Brain

Database of Common Sense: ThoughtTreasure: ThoughtTreasure is a database of 25,000
concepts, 55,000 English and French words and phrases, 50,000 assertions, and 100
scripts, which is attempting to bring natural language and commonsense capabilities to
computers. The conclusions page is especially clear and concise.


Long and thorough paper from Yugoslavia: Hierarchical Neural Networks and
Brainwaves: Towards a Theory of Consciousness: This paper gives "a comparative
biocybernetical analysis of the possibilities in modeling consciousness and other
psychological functions (perception, memorizing, learning, emotions, language,
creativity, thinking, and transpersonal interactions!), by using biocybernetical models of
hierarchical neural networks and brainwaves."


Books
Mehrotra, Kishan, Chilukuri Mohan, and Sanjay Ranka. Elements of Artificial Neural
Networks. Boston: MIT Press, 1997.
Skapura, David M. Building Neural Networks. Menlo Park, CA: Addison-Wesley
Publishing Company, 1996. This is an introductory textbook for the construction of
actual neural networks. It investigates many of the potential uses of neural networks in a
manner aimed at allowing students themselves to create these networks. Sample C code is
also provided.
