
RESEARCH PAPER

On
APPLICATION OF ARTIFICIAL NEURAL NETWORK IN CHARACTER
RECOGNITION
AND
ROBOT KINEMATICS
Submitted By
MIT H. PANDYA
As a part of term work prescribed by North Maharashtra University, Jalgaon
Third Year
in
ELECTRONICS & COMMUNICATION ENGINEERING

Department of Electronics & Communication Engineering


SHRI SANT GADGE BABA
COLLEGE OF ENGINEERING AND TECHNOLOGY, BHUSAWAL
(2010-2011)

INDEX

List of Figures                                             Page No.
Fig. 1   Interpreting a Word                                7
Fig. 2   Character Scaling                                  11
Fig. 3   Character Centering                                11
Fig. 4   Picture of the Neural Robot                        20
Fig. 5   Virtual Connections                                21
Fig. 6   Program Flow                                       22
Fig. 7   The Scanning Range Finder                          25
Fig. 8   Neural Network in Robotics                         27
Fig. 9   Motion by the Door                                 30
Fig. 10  Simulation of Robot Motion                         30
Fig. 11  Robot Motion from Corridor to the Room             31
Fig. 12  Robot Path in Unknown Environment                  31

List of Tables                                              Page No.
Table 1                                                     12
Table 2                                                     13

CHAPTERS                                                    Page No.

1. INTRODUCTION
2. APPLICATION OF ANN IN CHARACTER RECOGNITION
   2.1  Objectives
   2.2  Using Backpropagation in a Neural Network
   2.3  Solving the Problem
   2.4  Input Data
   2.5  Hidden Layer
   2.6  Output Data
   2.7  The Program
   2.8  Interpreting a Word
   2.9  Learning Characters
   2.10 Results
   2.11 Selecting the Number of Hidden Units
   2.12 Training Patterns and the Learning Procedure
   2.13 Testing Patterns and the Testing Procedure          10
   2.14 Suggested Improvements                              10
   2.15 Character Scaling                                   11
   2.16 Character Centering                                 11
3. APPLICATION OF ARTIFICIAL NEURAL NETWORK IN ROBOT KINEMATICS    15
   3.1  Introduction                                        15
   3.2  You Give the Answers                                19
   3.3  The Brain Figures Out the Answers on Its Own        19
4. NEURAL NETWORKS IN MOBILE ROBOT MOTION                   23
   4.1  Abstract                                            23
   4.2  Introduction                                        23
   4.3  Neural Networks in Robotics                         25
   4.4  Motion Planning Algorithm                           27
   4.5  How to Design a Neural Brain                        29
5. CONCLUSION                                               32
6. ACKNOWLEDGEMENT                                          33
7. REFERENCES                                               34

1. INTRODUCTION
Character recognition is a trivial task for humans, but writing a computer program that does character recognition is extremely difficult. Recognizing patterns is just one of those things humans do well and computers don't. The main reason for this, I believe, is the many sources of variability. Noise, for example, consists of random changes to a pattern, particularly near the edges, and a character with much noise may be interpreted as a completely different character by a computer program. Another source of confusion is the high level of abstraction: there are thousands of styles of type in common use, and a character recognition program must recognize most of these to be of any use.

There exist several different techniques for recognizing characters. One distinguishes characters by the number of loops in a character and the direction of their concavities. Another common technique uses backpropagation in a neural network, and this paper investigates how well a neural network solves the character recognition problem.

2. APPLICATION OF ANN IN CHARACTER RECOGNITION

2.1 OBJECTIVES
The objective of this project is to create an easy-to-use environment in which the user can draw characters and then let the program try to interpret them.

Such a program could be useful when demonstrating how character recognition using backpropagation works, or when demonstrating how many iterations of learning need to be performed to get a satisfactory result.

2.2 USING BACKPROPAGATION IN A NEURAL NETWORK


Backpropagation is a technique described by Rumelhart, Hinton and Williams in 1986. It is a supervised algorithm that learns by first computing the output using a feedforward network, then calculating the error signal and propagating the error backwards through the network.

2.3 SOLVING THE PROBLEM


Creating a program that does real-world character recognition would be too ambitious for a project of this size. It was therefore necessary to place some restrictions on the application:

Reducing the resolution of each character to 15x9 pixels was necessary to decrease memory use and learning time.

By letting the user draw characters directly in the program we eliminate the problem with noise described earlier. Noise, if desired, can be added by drawing arbitrary patterns around the drawn character. Letting the user draw characters directly in the program also simplifies demonstrations, since a demonstration can be created on the fly without having to create input files, etc.

2.4 INPUT DATA


As already mentioned, each character is represented by 15x9 pixels, forming an array of 135 input stimuli. Even though the resolution might seem small, it should be more than adequate to teach the program 26 distinct characters (A-Z).

2.5 HIDDEN LAYER


There does not seem to be a good rule for selecting the number of hidden units needed to perform backpropagation successfully. Instead the user has to experiment with different numbers of units to see what gives the most satisfactory result and the best speed.

The program uses only one hidden layer, since no advantage of using more hidden layers could be observed with the learning and test data used in this project.

2.6 OUTPUT DATA


Since we are particularly interested in the categorization of the characters, we use grandmother cells, where one and only one output unit responds to a certain character. We are only interested in upper-case letters and use 26 response units, one for each character. The output unit with the highest activity is selected as the character best matching the input data.

2.7 THE PROGRAM


The program takes one argument on startup indicating the name of the brainfile to use. A brainfile contains information about the network, such as the number of characters or patterns learned so far, the number of hidden units to use, the learning rate and the error limit to use, along with information about the characters learned and the network's biases and weights. If any new characters have been learned, these are added automatically to the brainfile when the user exits the program.

Since the biases and weights are also stored in the brainfile, the program remembers which characters you taught it in previous sessions. There is therefore no need to retrain the program every time you start it.

2.8 INTERPRETING A WORD

Fig 1. Interpreting a word


Figure 1 shows a screenshot of the main screen. The screen is dominated by 4 blue boxes at the top of the screen in which the user can draw characters. The program accepts any word ranging from 1 to 4 characters. When the user has entered a word it can be interpreted by the program by pressing the Interpret button. The resulting string will be shown in the textbox below the Interpret button.

The program interprets the input by running each character through a feedforward neural network, using the weights and biases it learned when performing backpropagation on the network for all characters. The activity of a unit in the hidden layer is calculated as h_j = Sigmoid(b_j + Σ_i s_i·w_ij), where Sigmoid is the function 1/(1 + e^(-x)), and similarly the activity of a unit in the response layer is r_k = Sigmoid(b_k + Σ_j h_j·v_jk). The program then selects the response unit with the highest output and displays its corresponding character.

Before any word or character can be interpreted by the program, the user must first teach the program his/her handwriting by clicking on the Add characters button.
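
For clarity, the feedforward pass described above can be written out in a few lines. The following is only an illustrative sketch, assuming NumPy, 35 hidden units (the number chosen later in the paper) and randomly initialized weights in place of a trained brainfile:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def interpret_character(pixels, w, b_hidden, v, b_out):
        """Feed one 15x9 character through the network and pick the best letter."""
        s = pixels.reshape(-1)                     # 135 input stimuli
        h = sigmoid(b_hidden + s @ w)              # hidden activities: h_j = Sigmoid(b_j + sum_i s_i*w_ij)
        r = sigmoid(b_out + h @ v)                 # response activities, one unit per letter A-Z
        return chr(ord('A') + int(np.argmax(r)))   # grandmother cell with the highest activity

    # Example call with untrained random weights and an empty character:
    w = np.random.uniform(-1, 1, (135, 35))
    v = np.random.uniform(-1, 1, (35, 26))
    b_hidden = np.random.uniform(-1, 1, 35)
    b_out = np.random.uniform(-1, 1, 26)
    print(interpret_character(np.zeros((15, 9)), w, b_hidden, v, b_out))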

2.9 LEARNING CHARACTERS


Figure 2 shows a screenshot of the learning screen. New characters can be added to the database by drawing the character in the blue box on the left side, selecting which character it maps onto and then pressing Add character. When done entering all new characters, pressing Learn characters makes the program learn the characters in the database using backpropagation. The program does this by first initializing all weights and biases with random values between -1.0 and 1.0. For each randomly chosen pattern it then computes the hidden and response activities as described in the previous chapter before it backpropagates the error through the network. The error in each response unit is calculated as dout_i = (T_i - r_i)·r_i·(1 - r_i), and the error in the hidden units is then calculated as dhid_i = (Σ_k dout_k·v_ki)·h_i·(1 - h_i). Finally the program updates the weights and biases: dc_i = dout_i·dt, db_i = dhid_i·dt, dv_ij = dout_i·h_j·dt and dw_ij = dhid_i·s_j·dt, where dt is the learning rate. The procedure of choosing a random pattern, performing the feedforward pass and then backpropagation continues until the total error of the network is less than the error limit or until the user interrupts the learning. The total error of the network is calculated as the sum-squared error between the target and the response values, e = Σ_k (T_k - r_k)². The error shown during the learning procedure is the greatest error over all the patterns.
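
A minimal sketch of one such update, using the same array shapes as in the previous sketch; this illustrates the update rules above rather than the author's actual implementation:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_step(s, target, w, b_hidden, v, b_out, dt=0.8):
        """One backpropagation update for a single pattern.
        s: 135-element input pattern; target: 26-element vector with the correct unit set to 1."""
        h = sigmoid(b_hidden + s @ w)           # hidden activities
        r = sigmoid(b_out + h @ v)              # response activities
        dout = (target - r) * r * (1.0 - r)     # error at each response unit
        dhid = (dout @ v.T) * h * (1.0 - h)     # error propagated back to the hidden units
        v += dt * np.outer(h, dout)             # dv = dout * h * dt
        w += dt * np.outer(s, dhid)             # dw = dhid * s * dt
        b_out += dt * dout                      # dc_i = dout_i * dt
        b_hidden += dt * dhid                   # db_i = dhid_i * dt
        return np.sum((target - r) ** 2)        # sum-squared error for this pattern

The greatest returned error over all patterns can then be compared against the error limit, as described above.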

2.10 RESULTS
Before the testing could begin, learning had to be performed on the network. There are
several problems to take into consideration when performing backprop on a neural network, such
as the number of hidden units to use, setting the learning rate etc.

2.11 SELECTING THE NUMBER OF HIDDEN UNITS


There is no known rule for choosing the number of hidden units best suited to the data, so some tests had to be performed to determine the most suitable number. The figure to the right shows the results from those experiments: six experiments were carried out for each number of hidden units, and the figure shows the mean value of those six experiments. The error did not settle down when using 5 hidden units, not even after 10 experiments, and that configuration was discarded. The error limit for these experiments was 0.012.

MS-DOS programs have access to only 640 KB of memory, so tests with more than 35 hidden units could not be performed. Based on the tests that could be carried out, 35 hidden units turned out to give the best results, both in terms of speed and smallest number of iterations.

2.12 TRAINING PATTERNS AND THE LEARNING PROCEDURE


The 640 KB memory limitation for MS-DOS programs also put a limit on the maximum number of patterns the training set could contain; this was found to be 520, in other words 20 patterns for each of the 26 characters in the English alphabet.

Using a learning rate of 0.8 and an error limit of 0.0003, the network needed over 650,000 iterations and over 3.5 hours to complete the task. The learning patterns are listed in appendix 2.1.

2.13 TESTING PATTERNS AND THE TESTING PROCEDURE


To test the network, 100 four-letter words were randomly selected from a dictionary to form a total of 400 testing patterns. The words were drawn directly in the program and the program's interpretation carefully noted.

The results were surprisingly good: almost 90% accuracy! A real character recognition program normally takes years of development to reach this level of accuracy. So why is this program so much better? The answer is simple; it's not. We have to keep in mind that this program eliminates many of the problems associated with character recognition programs, such as noise and the problem of separating characters. Another issue is the learning and testing patterns: all characters were drawn by the same person, and this of course affects the results significantly. If time had permitted, I would have run tests with characters written by more than one person.

Another factor behind the good result is the speed of today's computers. Calculations that would have taken a week on a computer a few years ago take only a couple of hours on a Pentium today.

2.14 SUGGESTED IMPROVEMENTS
Although the results were surprisingly good, I believe that the results could be improved
even further by implementing character scaling and centering.

2.15 CHARACTER SCALING


Since a neural network only receives data in the form of input stimuli, it cannot recognize different sizes of a character without having to learn all possible sizes. By letting the program scale characters, this problem would be reduced to a minimum.

Fig 2. Character Scaling

2.16 CHARACTER CENTERING


Another problem appears when the user draws characters with a different alignment than the one learned, i.e. drawing a left-aligned 'I' when all 'I's that have been learned are centered. This can easily be solved by letting the program center all characters before they are stored in the database and before they are interpreted.

Fig 3. Character Centering
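
Both suggested improvements amount to normalizing the drawn bitmap before it reaches the network. The sketch below only illustrates the idea (it is not part of the original program), assuming NumPy and a 15x9 boolean pixel grid:

    import numpy as np

    def center_and_scale(char, height=15, width=9):
        """Crop the drawn character to its bounding box, then stretch it back to the
        full 15x9 grid, so that size and alignment no longer matter."""
        rows = np.any(char, axis=1)
        cols = np.any(char, axis=0)
        if not rows.any():                      # empty drawing: nothing to normalize
            return char
        r0, r1 = np.where(rows)[0][[0, -1]]
        c0, c1 = np.where(cols)[0][[0, -1]]
        cropped = char[r0:r1 + 1, c0:c1 + 1]
        # nearest-neighbour resize of the cropped box back to the full grid
        ri = np.arange(height) * cropped.shape[0] // height
        ci = np.arange(width) * cropped.shape[1] // width
        return cropped[np.ix_(ri, ci)]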

Fig. A1 - Training Set

Fig. A2 - Test Set

3. APPLICATION OF ARTIFICIAL NEURAL NETWORK IN ROBOT KINEMATICS
3.1 INTRODUCTION
SIMPLE NEURAL NETWORK AS ROBOT BRAIN
This section gives a short introduction to the design of simple neural networks and how to use them as a robot brain. Let's look at a simple neural network consisting of only two inputs and two outputs.

Sensor 1   Sensor 2   Motor 1   Motor 2
   +1         +1         -1        -1
   +1         -1         +1        -1
   -1         +1         -1        +1
   -1         -1         +1        +1
This is of course not a random table. Imagine the sensors are on the front of a robot vehicle, and the motors drive this robot. If both sensors are pressed the robot has made a frontal collision and should stop both motors. If the left front sensor is pressed the right motor has to stop (thus the robot turns right), and similarly for the right front sensor. If neither sensor is pressed the motors should run.

Of course this table is simple enough to be programmed into the robot, but for the sake of
this discussion we'll put it into a neural network. We will treat each line in the table as a vector
(yes, you should have paid attention in math). This vector will have the format:
Input (sensor 1)
Input (sensor 2)
Output (motor 1)
Output (motor 2)

So, for the first line of the table:

( +1, +1, -1, -1 )
A vector can be seen as a 'pathway' in the neural brain. The brain itself is a matrix (a mathematical representation of all possible pathways between inputs, outputs and neurons), so in order to turn a single vector into a matrix we take the outer product of the vector with itself. For the first line of the table this yields:
              +1  +1  -1  -1
        +1  [ +1  +1  -1  -1 ]
        +1  [ +1  +1  -1  -1 ]
        -1  [ -1  -1  +1  +1 ]
        -1  [ -1  -1  +1  +1 ]

The brain should now have a 'pathway' which leads from 'both sensors pressed' to the solution 'stop both motors'. In order to check this we first have to turn our question into a vector again:

( +1, +1, ?, ? )
The outputs in this vector are question marks, because that's the answer we'd like the
neural brain to give us. To get this answer we'll multiply the brain matrix with the question
vector. Ignore any calculation which has the question mark in it. So the first number of the
resulting vector would be: (1 x 1) + (1 x 1) + (-1 x ?) + (-1 x ?) = (1 x 1) + (1 x 1) = 2. For the
total matrix this would look like this:
        [ +1  +1  -1  -1 ]   ( +1 )   ( +2 )
        [ +1  +1  -1  -1 ] x ( +1 ) = ( +2 )
        [ -1  -1  +1  +1 ]   (  ? )   ( -2 )
        [ -1  -1  +1  +1 ]   (  ? )   ( -2 )

Now these results need to be normalized, which means anything larger than zero becomes
+1 and anything smaller than zero becomes -1. This results in:
( +1, +1, -1, -1 )
And if we compare this result with the vector layout (so we remember what is what):
Input (sensor 1) +1
Input (sensor 2) +1
Output (motor 1) -1
Output (motor 2) -1
We can see that the neural brain just told us to turn off the motors (both get -1 as the answer), which is correct. So the neural brain has correctly remembered that if both sensors are activated, both motors should stop. This is quite remarkable for such a simple matrix. The fact that the calculated inputs (+1/+1) match the inputs from the question means that this is a valid answer as far as the neural network is concerned. If they did not match, we would know we had come across something the brain hasn't learned yet! We could do the same thing for the second line of the table, which would result in:
        [ +1  -1  +1  -1 ]
        [ -1  +1  -1  +1 ]
        [ +1  -1  +1  -1 ]
        [ -1  +1  -1  +1 ]
This new matrix can be added to our original neural brain by simply adding the numbers
together, which results in:

        [ +2   0   0  -2 ]
        [  0  +2  -2   0 ]
        [  0  -2  +2   0 ]
        [ -2   0   0  +2 ]
By doing this the original brain has in fact 'learned' a new question and answer (if the left sensor is pushed and the right sensor is not, then the left motor should run and the right motor should not). You can keep doing that for each line in the table, and eventually you'll end up with a finished neural brain which can answer all questions you could possibly ask regarding the two sensors. The completed brain, by the way, looks like this:
        [ +4   0   0  -4 ]
        [  0  +4  -4   0 ]
        [  0  -4  +4   0 ]
        [ -4   0   0  +4 ]

A lot of people use a reduced-size matrix (in this case a 2x2 rather than a 4x4). I prefer the large size because it's much easier to work with and it allows you to find the question for an answer. If, for example, you wonder what you would have to do to get both motors running, simply use the vector ( ?, ?, +1, +1 ) as input and you'll get the right answer.
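
The whole learn-by-outer-product and recall-by-matrix-multiplication procedure fits in a few lines. The sketch below reproduces the example above with NumPy; it is an illustration, not the original robot code:

    import numpy as np

    # The four question/answer vectors: (sensor 1, sensor 2, motor 1, motor 2)
    patterns = np.array([
        [+1, +1, -1, -1],   # frontal collision -> stop both motors
        [+1, -1, +1, -1],   # left sensor hit   -> stop the right motor
        [-1, +1, -1, +1],   # right sensor hit  -> stop the left motor
        [-1, -1, +1, +1],   # nothing pressed   -> run both motors
    ])

    # Learning: add the outer product of each vector with itself to the brain matrix.
    brain = np.zeros((4, 4), dtype=int)
    for p in patterns:
        brain += np.outer(p, p)
    print(brain)                               # the completed brain shown above

    # Recall: multiply the brain by a question vector (unknown entries set to 0),
    # then normalize everything to +1/-1 with the sign function.
    question = np.array([+1, +1, 0, 0])        # both sensors pressed, motors unknown
    print(np.sign(brain @ question))           # [ 1  1 -1 -1] -> stop both motors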
In this explanation we have started with a table and added all questions and answers to the neural brain. This could be done because we know all the questions and, perhaps more importantly, we know all the answers. In real life the questions may not be so obvious, and the answers unknown. In that case you would start with an empty neural brain (consisting of all zeros). There are basically two ways for this empty brain to learn:

3.2. YOU GIVE THE ANSWERS.


Every time the sensors reach a state the brain has never seen before, it stops and asks you to give the correct answer for that sensor state. For example, both inputs might be off and the neural brain asks you what to do in a case like that. You tell it to switch on both motors, and the brain adds this information (if both sensors are off, both motors must be on) as a vector to its knowledge. This is called training the neural brain, and it can be done if you know all the answers (for example, when you train the brain to recognize the letters of the alphabet; for the sake of argument we'll assume that you can read). This is the fastest way to teach a neural brain, and it can be compared with a child going to school.

3.3. THE BRAIN FIGURES OUT THE ANSWERS ON ITS OWN.


Every time the sensors reach a state the brain has never seen before, it selects a random answer and tries that. If it proves to be successful, the question/answer combination is added to the brain as a vector. If it is not successful, it randomly selects another answer and repeats the procedure until it finds a correct one. This can be done if you can provide a way of checking whether the selected answer is actually correct. This is more difficult than you might expect, because very often it is not easy for a brain to tell whether an answer is right or not, and if it guesses wrong there may not be a second chance (such as when your robot decides to drive out of the 10th-floor window). This is the slowest way of learning something, but the advantage is that it can be done unsupervised. You can compare this with giving a box of Lego to a child and waiting to see what it does with it. It depends on the situation which method you select.
Until now you had to actually program your robots, but this is no longer necessary. This robot, if left alone, will program itself! If you want to understand how the neural brain of this robot works, I suggest you first read the explanation of neural brains above. Of course you don't need to understand it; simply put the robot on the floor and see what happens. The bumper design still gives me difficulties: it is possible for the robot to get stuck (even though it is performing the right actions) simply because the bumpers are stuck.

Fig 4. Picture of the neural robot


This robot is equipped with a self learning neural network. The network consists of two
neurons, two inputs (sensors) and two outputs (motors). Due to the way the brain matrix is set up
the motors and sensors act as neurons themselves too, so if you want to be precise the network
has 6 neurons, hence the 16 values in the matrix. It is however called a two neuron network
because there are two 'units' that act as neurons only. From a control point of view the
connections are:

Fig 5. Virtual Connections


In reality each unit is connected to every other unit. The neural network actually runs on the PC, because programming it into the Cyber master may very well be possible but I thought that was much too much work. Besides, if you want to expand the limited intelligence of this particular model you'll have to use the power of the PC anyway.
The problem with a self-learning robot is of course that it has to learn something. As soon as an unknown input combination is encountered, the brain doesn't know what to do. So it starts to generate random solutions (random on/off states for the motors). Each one is tried, and if it is successful the current motor status is added to the brain as the solution (so, in fact, learned). If the input combination is known (it has learned this before) it can of course take the appropriate action immediately. The flow of the program is shown in Fig. 6.

Fig 6. Program flow
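
This flow can be sketched as follows. The success test (was_successful) and the sensor values are placeholders for the actual robot interface; only the brain arithmetic follows the matrix scheme from section 3.1:

    import random
    import numpy as np

    brain = np.zeros((4, 4), dtype=int)        # empty brain: nothing learned yet

    def respond(sensors):
        """Return the learned motor answer for a sensor state, or None if unknown."""
        question = np.array([sensors[0], sensors[1], 0, 0])
        answer = np.sign(brain @ question)
        # The answer is only valid if the recalled inputs match the question.
        if answer[0] == sensors[0] and answer[1] == sensors[1]:
            return answer[2], answer[3]
        return None

    def learn(sensors, motors):
        """Add a question/answer combination to the brain as an outer product."""
        global brain
        v = np.array([sensors[0], sensors[1], motors[0], motors[1]])
        brain += np.outer(v, v)

    def control_step(sensors, was_successful):
        """One pass of the program flow: use a known answer, or try random ones."""
        known = respond(sensors)
        if known is not None:
            return known                       # learned before: act immediately
        while True:
            motors = (random.choice([-1, +1]), random.choice([-1, +1]))
            if was_successful(sensors, motors):
                learn(sensors, motors)         # a successful trial becomes knowledge
                return motors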

4. NEURAL NETWORKS IN MOBILE ROBOT MOTION


4.1 ABSTRACT
This chapter deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-network-based technique. Our method of constructing a collision-free path for the moving robot among obstacles is based on two neural networks. The first neural network is used to determine the free space using ultrasound range finder data. The second neural network finds a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed technique are presented.

4.2 INTRODUCTION
Over the last few years, a number of studies have been reported concerning machine learning and how it has been applied to help mobile robots improve their operational capabilities. One of the most important issues in the design and development of intelligent mobile systems is the navigation problem. This consists of the ability of a mobile robot to plan and execute collision-free motions within its environment. However, this environment may be imprecise, vast, dynamic and either partially structured or unstructured. Robots must be able to understand the structure of this environment. To reach their targets without collisions, the robots must be endowed with perception, data processing, recognition, learning, reasoning, interpreting, decision-making and action capacities. The ability to acquire these faculties, and to treat and transmit knowledge, constitutes the key to a certain kind of artificial intelligence. Reproducing this kind of intelligence has, up to now, been a human ambition in the construction and development of intelligent machines, and particularly autonomous mobile robots.

To reach a reasonable degree of autonomy, two basic requirements are sensing and reasoning. The former is provided by an onboard sensory system that gathers information about the robot with respect to the surrounding scene. The latter is accomplished by devising algorithms that exploit this information in order to generate appropriate commands for the robot. It is with such algorithms that this chapter deals. We consider the motion planning problem well known in robotics: given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in the workspace, the problem is to find a continuous path from the initial position to the goal position which avoids collisions with obstacles along the way. In other words, the motion planning problem is divided into two sub-problems, called the Findspace and Findpath problems. For related approaches to the motion planning problem see (Latombe, J.C. 1991). The findspace problem is the construction of the configuration space of a given object and some obstacles. The findpath problem is determining a collision-free path from a given start location to a goal location for a robot. Various methods for representing the configuration space have been proposed to solve the findpath problem. The major difficulties in the configuration space approach are that expensive computation is required to create the configuration space from the robot shape and the obstacles, and that the number of searching steps increases exponentially with the number of nodes. Thus, there is a motivation to investigate the use of parallel algorithms for solving these problems, which have the potential for much increased speed of calculation.

A neural network is a massive system of parallel distributed processing elements connected in a graph topology. Several researchers have tried to use neural network techniques for solving the findpath problem with translational and rotational motion. Our approach basically consists of two neural networks that solve the findspace and findpath problems respectively. The first neural network is a modified principal component analysis network, which is used to determine the free space from ultrasound range finder data. The moving robot is modeled as a two-dimensional object in this workspace. The second one is a multilayer perceptron, which is used to find a safe direction for the next robot step on the collision-free path in the workspace from a start configuration to a goal configuration while avoiding the obstacles. The rest of this chapter is organized as follows: first a brief description of neural network applications in robotics, then our approach to the robot motion problem and our motion planning strategy, which uses two neural networks to solve the findspace and findpath problems respectively. Simulation results and conclusions are given at the end.

4.3 NEURAL NETWORKS IN ROBOTICS

Fig 7.The Scanning Range Finder


The interest in neural networks stems from the wish to understand principles that lead, in some manner, to the comprehension of basic human brain functions, and to build machines that are able to perform complex tasks. Essentially, neural networks deal with cognitive tasks such as learning, adaptation, generalization and optimization. Indeed, recognition, learning, decision-making and action constitute the principal navigation problems. To solve these problems, fuzzy logic and neural networks are used. They improve the learning and adaptation capabilities related to variations in the environment where information is qualitative, inaccurate, uncertain or incomplete. The processing of imprecise or noisy data by neural networks is more efficient than with classical techniques because neural networks are highly tolerant to noise.

A neural network is a massive system of parallel distributed processing elements (neurons) connected in a graph topology. Learning in the neural network can be supervised or unsupervised. Supervised learning uses classified pattern information, while unsupervised learning uses only minimum information without preclassification. Unsupervised learning algorithms offer less computational complexity and less accuracy than supervised learning algorithms; the former learn rapidly, often in a single pass over noisy data. After learning, the neural network expresses the knowledge implicitly in its weights. A widely accepted approximation of the Hebbian learning rule is

    w_ij(t+1) = w_ij(t) + η·x_i(t)·y_j(t)    (1)

where x_i and y_j are the output values of neurons i and j, respectively, which are connected by the synapse w_ij, and η is the learning rate (note that x_i is the input to the synapse).

Surveys exist of the types, architectures, basic algorithms and problems that may be solved using some type of neural network. The applications of neural networks for classification and pattern recognition are well known. Some interesting classification problems in the robot navigation domain have been successfully solved by means of competitive neural networks. The use of competitive neural networks in control and trajectory generation for robots can be found in the literature, as can the use of neural networks for sensor data processing in map updating and learning of robot trajectories. For obstacle avoidance purposes, a recurrent type of neural network has been used with the gradient back-propagation technique for training the network. The use of supervised neural networks for robot navigation in a partially known environment is presented in (Chochra 1997). An interesting solution using the Jordan architecture of neural network is described in (Tani, J. 1996); here the robot learns an internal model of the environment with a recurrent neural network, predicts the succession of sensor inputs and, on the basis of the model, generates navigation steps as motor commands. The solution of the minimum path problem with two recurrent neural networks is given in (Wang 1998). Solutions that combine the learning ability of neural networks with fuzzy logic for representing human knowledge, applied to robot navigation, also exist. A comprehensive view of the navigation problem for autonomous vehicles is given by a team of researchers at CMU, who present results from the design of an autonomous terrain vehicle, using laser range finder data and different types of neural networks for learning the path from vision system data and for obstacle avoidance algorithms. Our first work on the use of neural networks for object classification in the map of the robot environment used cluster analysis with range finder data. We extend this acquired knowledge by using neural networks in the robot motion planning algorithm.
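
As a side note, the Hebbian update in equation (1) is a one-line rule. A minimal sketch, assuming NumPy and an arbitrary learning rate of 0.1:

    import numpy as np

    def hebbian_update(w, x, y, eta=0.1):
        """Equation (1): w_ij(t+1) = w_ij(t) + eta * x_i(t) * y_j(t).
        x holds the pre-synaptic outputs, y the post-synaptic outputs."""
        return w + eta * np.outer(x, y)

    w = np.zeros((3, 2))                 # 3 inputs fully connected to 2 outputs
    x = np.array([1.0, 0.0, 1.0])        # example neuron activities
    y = np.array([0.5, 1.0])
    w = hebbian_update(w, x, y)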

4.4 MOTION PLANNING ALGORITHM

Fig 8. Neural Network In Robotics


The philosophy of our algorithm comes from the way a human moves in an environment: he moves between obstacles on the basis of what his eyes see, and makes the next step toward the goal within the free space. Analogously, our robot will move safely in its environment on the basis of the data visible to a scanning ultrasound range finder. It must first map the workspace from the measured data and find the free space for robot motion, and then determine the next robot azimuth for a safe step toward the goal. To solve these problems we use a neural network technique. The first neural network uses the measured range finder data to map the workspace in front of the robot and to find the free space segment. This segment is used as an input to the second neural network, together with the goal location, to determine the direction of the proposed next navigation step for moving the robot. The algorithm is of an iterative type. In each iteration, the last orientation of the moving robot is stored and the neural network selects the direction of the next navigation step. To determine the direction, the status of the partial configuration space is required; the map from the range finder is proposed to give this status. Moreover, a control unit is used to provide the information required by the neural networks, to control the operating sequence and to check the reachability of the goal configuration from the start configuration. Our motion planning algorithm can be summarized as follows (a code sketch of this loop is given after the list):
1. Specify the object, the environment information and the start and goal configurations.
2. Set the current object orientation equal to the goal orientation.
3. Activate the range finder via the control unit to determine the local part of the map of the workspace.
4. Initialize the first neural network, which uses the measured data from the range finder. The neural network is iterated until the weights and the outputs converge, returning one free space segment.
5. Activate the second neural network. It returns the direction k of the next robot motion step.
6. Generate the robot motion path in the direction k and go to step 3.
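
The sketch below shows only the control flow of steps 1-6. The two neural networks and the range finder are replaced by trivial stand-in functions, so none of the names or formulas here come from the original paper:

    import math
    import random

    def scan_range_finder(pose):
        # Step 3: stand-in for the ultrasound scan around the current pose.
        return [random.uniform(0.5, 3.0) for _ in range(36)]

    def find_free_space(scan):
        # Step 4: the first network (modified PCA) would return one free-space segment;
        # here we simply pick the sector with the largest measured range.
        return max(range(len(scan)), key=lambda i: scan[i])

    def choose_direction(free_sector, pose, goal):
        # Step 5: the second network (multilayer perceptron) would pick a safe azimuth;
        # here we steer toward the goal, slightly biased by the free sector found above.
        to_goal = math.atan2(goal[1] - pose[1], goal[0] - pose[0])
        return 0.9 * to_goal + 0.1 * math.radians(free_sector * 10)

    def plan_motion(start, goal, step=0.2, max_steps=1000):
        pose = list(start)                             # steps 1-2: start configuration
        for _ in range(max_steps):
            if math.dist(pose, goal) < step:           # goal reached
                return True
            scan = scan_range_finder(pose)             # step 3
            sector = find_free_space(scan)             # step 4
            k = choose_direction(sector, pose, goal)   # step 5: direction of the next step
            pose[0] += step * math.cos(k)              # step 6: move and repeat
            pose[1] += step * math.sin(k)
        return False

    print(plan_motion((0.0, 0.0), (3.0, 2.0)))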

4.5 HOW TO DESIGN A NEURAL BRAIN


Designing and using neural brains is an art rather than a science. Even though the brain itself is fixed, getting the proper data configuration and variable settings is still difficult and requires trial and error (do play around with the software to get a feel for this). The following instructions describe the process; a small sketch of steps 4-6 is given after the list.
1. Determine what you want the robot to do.
2. Determine the number and types of inputs you require.
3. Determine the number of outputs you require.
4. Map inputs and outputs to a vector (both must be in the same vector). If you have fewer inputs and outputs than points in the vector, spread them evenly (you may need to experiment with this to get the best results).
5. Make a complete set as described in point 4 for all possible combinations of inputs and outputs. These are all the combinations that the network will have to learn. Note that you personally don't need to have any clue what the possible relationships are between inputs and outputs, as long as you know which go together. Use the program to enter the vectors and save them to disk one by one (this is a QBasic source). Note that this program represents the vector as a 5x5 square block; it is nevertheless still a vector, so don't confuse this with a real matrix. Then use the copy command (in DOS: copy *.pat train.ful) to combine the vectors into an input file called, for example, train.ful. This is the training file. Delete the vector files.
6. Repeat step 5, but this time only put the inputs in the vectors. The file you create this way is the test file, called test.ful.
7. Now play around with the different variables and try to find the most effective combination. For this you use the training file to train the network and the test file to test the effectiveness. If you're not satisfied with the results, modify the variables again and repeat the training and testing until you get good results. If this fails too, you may need to change the way the inputs and outputs are distributed over the vectors. If all fails, you may have a problem at hand that cannot be solved with this kind of neural network.
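
As an illustration only, here is what steps 4-6 could look like for the two-sensor/two-motor table from the previous chapter, written in Python instead of the QBasic tool mentioned above; the file names train.ful and test.ful come from the text, everything else is an assumption:

    # Each pattern is one vector holding both inputs and outputs; the test file
    # holds the same vectors with the outputs left at 0 (unknown).
    patterns = [
        [+1, +1, -1, -1],
        [+1, -1, +1, -1],
        [-1, +1, -1, +1],
        [-1, -1, +1, +1],
    ]

    with open("train.ful", "w") as train, open("test.ful", "w") as test:
        for p in patterns:
            train.write(" ".join(f"{v:+d}" for v in p) + "\n")   # full question + answer
            inputs_only = p[:2] + [0, 0]                         # step 6: inputs only
            test.write(" ".join(f"{v:+d}" for v in inputs_only) + "\n")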

Fig 9. Motion By The Door

Fig. 10. Simulation Of Robot Motion

Fig. 11. Robot Motion From Corridor To The Room

Fig 12. Robot Path In Unknown Environment

5. CONCLUSION
Neural networks and backpropagation seem to be well suited for character recognition. Many of the problems that arise when using backpropagation to recognize characters can easily be eliminated or reduced by adding routines that scale, center, etc. Other problems, like noise and the problem of separating characters, also exist for the other methods of character recognition, and there is not much one can do about them except improving the quality of the equipment used when scanning/reading characters (i.e. using a higher resolution) or using a database or artificial intelligence to give more accurate interpretations.

Many say that the slow speed of the backpropagation routine is a major drawback, but I consider this only a minor drawback, since the speed of new computers doubles every third year, and speed will then become a less important issue than it is today.

ACKNOWLEDGEMENT

I express my gratitude and thanks to Mr. R. P. Singh for giving me the opportunity to avail of the best facilities available, through which I have gained practical knowledge in a suitable working environment.

I am also grateful to Mr. G. A. Kulkarni, H.O.D., E. & C. Department, and the other staff for the much-needed and valuable help rendered by them during the case study.

Finally, deep thanks to Mr. G. U. Patil and the other faculty members of the Department of Electronics and Communication, my friends and my family members for their constant guidance and encouragement during the case study.

- Mit Pandya
  (T.E. E&C)
