
Session 1532

Constructing a Wall-Follower Robot for a Senior Design Project

Daniel Pack, Scott Stefanov, George York, and Pam Neal


DFEE/US Air Force Academy

ABSTRACT--A good senior design project should incorporate both the breadth and the depth of
knowledge a student has acquired throughout the undergraduate curriculum. Construction of an autonomous
wall-follower robot accomplishes this goal well. This particular senior project, currently underway at the USAF
Academy, emphasizes both hardware design and software development. The objective of the project is to
design a robot, with a human-like navigational “intelligence,” which maneuvers within a maze to reach a
designated target position. To do so, the robot must contain both a high-level reasoning module and a low-
level motion control module. In addition, the two modules must work together cooperatively to execute the
desired task. Construction of the two modules requires software development as well as a complete system
design using mechanical parts, circuits, and a microprocessor. For a successful end product, each team,
consisting of two students, must give careful consideration to the various design trade-offs. As a result of the
project, each student will gain engineering confidence and develop critical and analytical thinking skills.

I. INTRODUCTION

The study of robots has received a considerable amount of attention in the past two decades. The term
“robot” was mentioned in the literature as early as the 1920’s. It was, however, not until the late 1970’s that a
community of scholars dedicated to this subject emerged. The field of robotics is unique in that it incorporates
multiple disciplines: specialties include computer science, mechanical engineering, electrical engineering,
physics, and mathematics, to name a few. Due to this interdisciplinary nature, constructing a robot requires an
understanding of various facets of the aforementioned areas of expertise.

By the early 1980’s the technology had matured enough to produce robots of reasonable size, weight,
and capability. Robots can now be easily found in the manufacturing sector of industry. Conventional robots,
however, still perform repetitive tasks without much “intelligence.” Researchers around the globe now seek
ways to embed intelligence in robots so that they can perform complex tasks. Currently, specific tasks can be
executed by robots using AI tools, but no universal system exists that can truly “think.” Therefore,
construction of even a simple robot for a task such as navigation within a maze can be challenging and
educational. With this in mind we proposed building a wall-follower robot as a senior design project for
our EE department seniors. The project was one of many interesting projects (approximately 20) students
could choose from, and two seniors decided to take the challenge. We initially wanted to have more than a single
team to encourage efficient design through competition among project design teams, but we believe we can
still learn valuable lessons from the single-team experience. The team currently has a working prototype which
can control the motion of the wheels while an infrared (IR) system detects walls. Eventually, multiple IR
transmitter/receiver pairs will be installed to sense the robot’s environment.

The senior design course is a single-semester endeavor. As a result, we have decided to provide the
design team with a pre-fabricated mobile robot platform made by A K Peters, Ltd. [1]. The platform includes
the robot body, wheels, two motors, and gear boxes¹. We believe this will help the students avoid spending too
much time designing and revising the robot body. Fig. 1 shows a sketch of the robot the design team is currently
working on. A caster wheel is attached at the back of the platform to provide stability for the robot. The
senior design project itself takes place during the Spring 1996 semester; the project team, however, completed
the initial part of the work as the final project of another laboratory class.

Since the goal of this project is to create a mobile robot with navigational intelligence, the project team
must design motor control circuitry, sensing and interface circuitry, and associated control software. In addition,
the team must design an embedded micro-controller board for the overall control of the robot. We decided to
use the Motorola 68HC11 micro-processor as the “brain” of the system since the students are already familiar
with this device from other courses.

[Figure 1. Mobile robot body: (a) top view, (b) front view. Labeled components include the caster wheel,
circuit board, gearboxes, wheels, and IR transmitter/detector.]

To accomplish the goal, the team must execute the following tasks:
1. Develop and implement both the hardware and software design for controlling robot motion.
2. Select and implement sensory devices and devise software to extract information from them.
3. Create algorithms for the robot to learn its environment. (The robot will be tested in a maze with walls that
can be relocated easily to alter the robot environment. For the final test run, the robot will be given a period of
time to learn its current environment before it is told to navigate to a designated location.)
4. Incorporate the above three tasks to achieve the set goal. This means coming up with a control architecture
to sense the robot’s environment, plan its action based on the sensed information, and execute the planned
motion (a sketch of such a sense-plan-act loop appears after this list).
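
For concreteness, a minimal sketch of the kind of sense-plan-act loop task 4 calls for is given below, written
in C. All of the names here (SensorData, Move, read_sensors, plan_next_move, execute_move) are hypothetical
placeholders with toy stub bodies, not the team’s actual code; they only illustrate how the three subsystems
plug into one loop.

    /* A minimal sense-plan-act skeleton for task 4.  All names here
     * (SensorData, Move, read_sensors, plan_next_move, execute_move) are
     * hypothetical placeholders; the stubs only illustrate the loop structure. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { int front, left, right; } SensorData;   /* IR distances */
    typedef enum { FORWARD, TURN_LEFT, TURN_RIGHT, STOP } Move;

    static int steps = 0;                       /* stand-in for "target reached" */
    static bool goal_reached(void)       { return steps >= 5; }
    static SensorData read_sensors(void) { SensorData s = {20, 5, 5}; return s; }

    static Move plan_next_move(SensorData s)
    {
        /* Trivial wall-follower rule: turn when a wall is close ahead. */
        return (s.front < 10) ? TURN_RIGHT : FORWARD;
    }

    static void execute_move(Move m)
    {
        printf("executing move %d\n", (int)m);  /* real code would drive the motors */
        steps++;
    }

    int main(void)
    {
        while (!goal_reached()) {
            SensorData s = read_sensors();   /* sense */
            Move m = plan_next_move(s);      /* plan  */
            execute_move(m);                 /* act   */
        }
        execute_move(STOP);
        return 0;
    }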

¹ We hasten to add that we do not provide students with the controller board that the same company
manufactures. Design of the control board is a part of the project the students must implement.


As the team struggles to accomplish the above four tasks, the students will develop decision-making
skills based on data analysis (during the sensor selection process and during the development of the overall
control architecture) and will gain empirical experience that cannot be obtained through simulation studies,
including how to deal with noise and the non-ideal characteristics of devices. Students will also learn to work
together as a team, a valuable lesson as they graduate and join the engineering work force.

The remainder of this paper is organized in the following way. We start in section II with a brief
description of the robot’s environment--the maze. In section III, we present the motion control module of the
robot, which uses a pulse-width modulation scheme to move the robot forward, move it backward, and turn it.
Section IV describes how an IR system is implemented to detect the walls of the maze, followed by a description
of the “brain” of the robot in section V, where we discuss schemes the robot can use to learn about its
environment. Finally, a summary and directions for future study conclude the paper.

II. Robot Environment

We have created a large portion of the maze that serves as the environment in which to test the robot’s
human-like navigational intelligence. The goal in creating the maze was to provide flexibility in the robot’s
environment. We wanted to start with a simple maze to test primitive control of the mobile robot and its low-
level intelligence, yet still have the capability to increase the complexity of the maze to fully test the limits
of the robot’s intelligence. Our maze can be configured as a simple one-floor maze, or can be modified into a
three-dimensional, multiple-floor maze. A single-floor maze is 12’ by 8’ and is composed of six 4’ by 4’
segments (Fig. 2). These maze segments can be stacked to form two-, three-, and six-floor mazes. The robot
can travel between floors using ramps.

[Figure 2. Maze environment: (a) maze dimensions (12’ by 8’ overall, with a 1’ dimension also marked),
(b) a single 4’ by 4’ maze segment.]

The walls of the maze can be modified into any configuration. Initially, we are keeping the maze course
simple, with walls made of a uniform material, wood. To add complexity, we can increase the difficulty of
the maze course and also randomly change the material of some wall segments. By changing the material of the
walls we can test the robustness of the robot’s sensors and the robot’s ability to adapt to a changing environment.

III. Motion Control Method


The robot must have at least three different types of motion to move about its environment: it must be
able to move forward, move backward, and turn. The three types of motion are generated by controlling the
angular velocities of the two motors attached to the robot body platform. Motor speed is controlled with a
pulse-width modulation scheme. For readers who may not be familiar with this scheme, we briefly describe
the nature of pulse-width modulation.

[Figure 3. A simple motor driving system: a voltage source Vs connected to a motor through a switch.
Figure 4. A 50% pulse train: one period shown, with the switch on for half of the period.]

Suppose that we are controlling a motor by connecting a voltage source to the motor as shown in
Fig. 3. If we close the switch, the motor will start to rotate. Now imagine what will happen if we open and
close the switch and repeat this process rapidly. The motor speed will be roughly proportional to the fraction
of time the switch is closed. The underlying principle of the pulse-width-modulation scheme is identical to the
scenario we just described: by controlling the time during which the switch is on, the motor speed can be
controlled. The fraction of each period during which the switch is on is referred to as the duty cycle. A duty
cycle of 100% means that the switch is closed at all times, while a duty cycle of 0% means that the switch is
open at all times. Fig. 4 shows a 50% duty-cycle pulse train.
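
To make the duty-cycle arithmetic concrete, the short C sketch below converts a duty-cycle percentage and a
PWM period (expressed in timer ticks) into the switch on-time. The period value is illustrative only and is
not the setting used by the project team.

    /* Converting a duty cycle into switch on-time, as described above.
     * The period value and tick units are illustrative, not the team's settings. */
    #include <stdio.h>

    static unsigned int on_ticks(unsigned int period_ticks, unsigned int duty_percent)
    {
        return (period_ticks * duty_percent) / 100;   /* time the switch stays closed */
    }

    int main(void)
    {
        unsigned int period = 200;                    /* hypothetical PWM period in timer ticks */
        printf("  0%% duty -> %u ticks on\n", on_ticks(period, 0));    /* always open   */
        printf(" 50%% duty -> %u ticks on\n", on_ticks(period, 50));   /* Fig. 4 case   */
        printf("100%% duty -> %u ticks on\n", on_ticks(period, 100));  /* always closed */
        return 0;
    }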

The two motors are individually controlled using a proportional control method² and coordinated to
synchronize the speeds of the two motors. For each motor, the project team has installed an encoder system
which provides the velocity information for each wheel. This information is fed back to the 68HC11
micro-processor, where it is used to alter the desired velocity command for each motor. Fig. 5 shows the
feedback mechanism implemented in the mobile robot.

[Figure 5. Motor control feedback loop: the encoder pulse train from each motor is read by the 68HC11 board,
which drives the motor through a motor interface chip.]

² The command to each motor is proportional to the error between the desired and the actual angular motor
velocities.


The 68HC11 chip has built-in I/O ports that can easily be used to implement the control scheme. For
the implementation, PORTA of the board is selected for both the encoder inputs and the motor command outputs.
The input capture and output compare registers of the 68HC11 chip, along with the free-running timer register,
are used to control the duty cycle based on the information acquired from the encoder.
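
As an illustration of the proportional control described above and in footnote 2, the following C sketch runs
the update law against a toy wheel model: the measured encoder ticks are compared with the desired ticks per
control period, and the duty cycle is adjusted in proportion to the error. The gain, the starting duty cycle,
and the simulated wheel are invented for illustration; the team’s actual code reads the encoder through the
68HC11 input-capture registers and adjusts the PWM on-time through the output-compare registers.

    /* A sketch of the proportional speed controller described above and in
     * footnote 2.  The hardware interface is replaced by a toy simulation here;
     * the real code reads the encoder through the 68HC11 input-capture registers
     * and sets the PWM on-time through the output-compare registers. */
    #include <stdio.h>

    /* Toy plant: pretend the wheel produces one encoder tick per 2% of duty cycle. */
    static unsigned int simulated_ticks(int duty_percent) { return duty_percent / 2; }

    int main(void)
    {
        int duty = 20;                       /* starting duty cycle (percent)      */
        const unsigned int desired = 30;     /* desired encoder ticks per period   */

        for (int step = 0; step < 8; step++) {
            unsigned int actual = simulated_ticks(duty);
            int error = (int)desired - (int)actual;

            duty += error / 2;               /* command proportional to the error  */
            if (duty > 100) duty = 100;      /* keep the duty cycle valid          */
            if (duty < 0)   duty = 0;

            printf("step %d: actual=%u ticks, duty=%d%%\n", step, actual, duty);
        }
        return 0;
    }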

Three different movements can then easily be produced by controlling the duty cycle of the command
pulse train to each motor. For example, to move the robot forward, the duty cycles of the pulse trains to both
motors must be the same. Due to the difference in the rolling resistance of each drive wheel, however, sending
command pulse trains with the same duty cycle to the two motors will not by itself guarantee straight motion.
The encoder feedback system remedies this situation by counting the number of ticks the detector receives over
a specified period of time. Since there is a set number of ticks per revolution of the wheel, this feedback
gives the micro-processor the information it needs to adjust each motor command. At present, the student team
has implemented both forward and turning motion. To move backward, the same scheme as for the forward
motion can be applied, provided the voltage polarities to the motors are reversed. For the turning motion,
opposite polarities are applied to the two motors. This causes one wheel to move forward while the other
moves backward, resulting in a turning motion.
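
The mapping from the three motions to per-motor commands can be summarized in a few lines of C. The MotorCmd
structure and the enumeration below are hypothetical; the team’s implementation drives the motor interface
chip through PORTA rather than through this kind of abstraction.

    /* A sketch of how the three motions can be expressed as per-motor commands.
     * The MotorCmd structure and motion_to_commands() are hypothetical; the
     * team's code drives the motor interface chip through PORTA instead. */
    #include <stdio.h>

    typedef struct {
        int duty_percent;   /* PWM duty cycle for this motor              */
        int forward;        /* 1 = normal polarity, 0 = reversed polarity */
    } MotorCmd;

    typedef enum { MOVE_FORWARD, MOVE_BACKWARD, TURN_LEFT, TURN_RIGHT } Motion;

    /* Fill in commands for the left (cmd[0]) and right (cmd[1]) motors. */
    static void motion_to_commands(Motion m, int duty, MotorCmd cmd[2])
    {
        switch (m) {
        case MOVE_FORWARD:                       /* same duty, same polarity      */
            cmd[0] = (MotorCmd){duty, 1}; cmd[1] = (MotorCmd){duty, 1}; break;
        case MOVE_BACKWARD:                      /* same duty, polarity reversed  */
            cmd[0] = (MotorCmd){duty, 0}; cmd[1] = (MotorCmd){duty, 0}; break;
        case TURN_LEFT:                          /* opposite polarities: spin     */
            cmd[0] = (MotorCmd){duty, 0}; cmd[1] = (MotorCmd){duty, 1}; break;
        case TURN_RIGHT:
            cmd[0] = (MotorCmd){duty, 1}; cmd[1] = (MotorCmd){duty, 0}; break;
        }
    }

    int main(void)
    {
        MotorCmd cmd[2];
        motion_to_commands(TURN_RIGHT, 50, cmd);
        printf("left: %d%% %s, right: %d%% %s\n",
               cmd[0].duty_percent, cmd[0].forward ? "fwd" : "rev",
               cmd[1].duty_percent, cmd[1].forward ? "fwd" : "rev");
        return 0;
    }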

IV. Infrared Sensing System

The motion control mechanism is of little use without a sensory system that senses the environment in
which the robot is roaming. We discuss this issue in the current section.

In order for the mobile robot to navigate and learn the maze (and avoid running into walls) it needs to
have some sort of a sensor system. This sensor system will provide the robot with distance information, thus
enabling it to stay a reasonable distance from the walls as well as determine where the openings are. This
information, when fed into the micro-controller system, will help the robot to properly learn the maze.

Though our students may design any type of sensor system they wish, we suggested they use one or
more infrared sensors. Infrared sensors are a relatively inexpensive and fast way of providing the information
required. Touch sensors are another possibility, but this approach would be less efficient (the sensor would
have to come in contact with each wall) and certainly less interesting from a design point of view. A system
of IR sensors would provide the robot with wall proximity information without requiring the robot to come
in contact with walls.

A single infrared sensor subsystem consists of an IR emitter, a detector, and associated amplifiers. A
block diagram is shown in Fig. 6. In the presence of an obstruction, such as a maze wall, some of the emitted
energy will be reflected off the wall and enter the detector. Current through the detector increases as more
energy is detected. This current is then amplified and converted to a voltage. The resulting voltage is fed to
the A/D converter in the 68HC11.

[Figure 6. Infrared sensor subsystem: an emitter and detector facing the wall, with an amplifier and
current-to-voltage converter whose output goes to the 68HC11 control board.]


The digitized voltage provides both the motion control system and the navigation system with distance
information. The students will learn that in the real world trade-offs must be made. The more power the
emitter puts out, the more robust the detection system. However, this robot runs on batteries so power is
limited. They will also have to consider the material and reflective quality of the maze walls. While their
sensor may work under ideal conditions, it may not work in the real world (the maze). The students will also
have to consider physical factors such as proximity of the detector to the emitter. They will likely have to put
a shield around the detector to avoid saturation by the direct radiation of the emitter.

Currently, the project team has implemented an IR system which can not only determine whether a wall
exists in front of the robot but also measure the distance to it, up to approximately two feet. Once the sensing
system is completed, this information will be used by the navigation system, which is discussed next.
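
A hedged sketch of how the digitized IR reading might be turned into a rough distance estimate is shown below.
The 8-bit reading range and the calibration values are invented for illustration; an actual table would be
built from measurements against the maze walls, and the team’s calibration may take a different form.

    /* A sketch of turning a digitized IR reading into a rough distance.  The
     * 8-bit reading range and the calibration table values are invented for
     * illustration; a real table would come from measurements against the
     * actual maze walls. */
    #include <stdio.h>

    struct cal_point { unsigned char adc; unsigned int distance_inches; };

    /* Hypothetical calibration: stronger reflection (larger ADC value) = closer wall. */
    static const struct cal_point table[] = {
        {200, 2}, {150, 6}, {100, 12}, {60, 18}, {30, 24}
    };

    /* Return an approximate distance, or -1 if no wall is detected within ~2 feet. */
    static int ir_distance(unsigned char adc_value)
    {
        for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
            if (adc_value >= table[i].adc)
                return (int)table[i].distance_inches;
        return -1;                       /* reading too weak: treat as "no wall" */
    }

    int main(void)
    {
        printf("ADC 180 -> %d inches\n", ir_distance(180));   /* close wall */
        printf("ADC  20 -> %d inches\n", ir_distance(20));    /* no wall    */
        return 0;
    }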

V. Navigational Algorithm

Mazes are conceptually easy for students. They are an early life experience for many and appear simple
at first glance. Once the difficulties of encoding a maze solution into an algorithm become apparent, however,
there are several layers of complexity that allow for functional solutions ranging from simple to elegant.

This design project affords the robot the opportunity to learn about the maze and store this knowledge.
The ability to learn and use the information gained is a fundamental concept of artificial intelligence. There are
many methods of knowledge representation available, but the one we use initially involves node analysis and
depth-first searches to find a designated location. In the remainder of our discussion, we will use an exit as the
desired final robot location. The robot will initially travel about the maze and record information in the form
of node relationships.

The asterisks in Fig. 7 label all of the possible nodes for this maze. Nodes are identified by every
end point, corner, and intersection. Once all the nodes traveled have been recorded, as long as one of them is
at the exit, a depth-first analysis will yield a solution. The easiest way to find the exit of a maze is to
follow the right or left wall from the current position. This technique is, however, limited to mazes with one
exit, one level, and no “islands” (circular loops). The first maze design will have only one level and one
exit, but can have circular loops. These islands will force the robot to abandon the aforementioned simple
strategy for a more complex one.

[Figure 7. Minimal-size 4x4 maze, with nodes marked by asterisks. The maze includes an island, dead ends, and
a single exit.]

The maze also incorporates dead space (area that cannot be traveled) and dead ends. No matter where the
robot starts from, an exit solution can be derived by going to the nearest node and describing a path from node
to node to the exit. Each node will contain a vector with the direction and magnitude to the next node toward
the exit. The direction will be coded as North, South, East, or West, but these names are relative to the
initial (North) starting direction, since the robot will not have a compass. The magnitude will be an absolute
distance. This information is all that is required to detect the existence of loops and to avoid them on the
way to the exit.
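
The node-and-vector representation and the depth-first search described above can be sketched in C as follows.
The Node structure, the fixed-size arrays, and the tiny example graph are invented for illustration and are not
the team’s data structures; marking visited nodes is what lets the search cope with islands (loops).

    /* A minimal sketch of the node representation and depth-first search
     * described above.  The Node structure, the fixed-size arrays, and the
     * example graph are invented for illustration. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_NODES     32
    #define MAX_NEIGHBORS 4

    typedef enum { NORTH, EAST, SOUTH, WEST } Heading;   /* relative to start heading */

    typedef struct {
        int     neighbor[MAX_NEIGHBORS];   /* index of adjacent node, -1 if none */
        Heading dir[MAX_NEIGHBORS];        /* direction of travel to that node   */
        int     dist[MAX_NEIGHBORS];       /* distance (magnitude) to that node  */
        bool    is_exit;
    } Node;

    static Node nodes[MAX_NODES];
    static bool visited[MAX_NODES];        /* marking visited nodes handles islands */

    /* Depth-first search from node n; returns true if the exit is reachable.
     * The path is printed from the exit back toward the starting node. */
    static bool dfs_to_exit(int n)
    {
        if (nodes[n].is_exit) { printf("exit at node %d\n", n); return true; }
        visited[n] = true;
        for (int i = 0; i < MAX_NEIGHBORS; i++) {
            int next = nodes[n].neighbor[i];
            if (next >= 0 && !visited[next] && dfs_to_exit(next)) {
                printf("node %d: go dir %d for %d units\n", n, nodes[n].dir[i], nodes[n].dist[i]);
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        /* Tiny example graph: 0 -north-> 1 -east-> 2 (exit), plus a dead end 0 -east-> 3. */
        for (int i = 0; i < MAX_NODES; i++)
            for (int j = 0; j < MAX_NEIGHBORS; j++)
                nodes[i].neighbor[j] = -1;
        nodes[0].neighbor[0] = 1; nodes[0].dir[0] = NORTH; nodes[0].dist[0] = 4;
        nodes[0].neighbor[1] = 3; nodes[0].dir[1] = EAST;  nodes[0].dist[1] = 2;
        nodes[1].neighbor[0] = 2; nodes[1].dir[0] = EAST;  nodes[1].dist[0] = 4;
        nodes[2].is_exit = true;

        if (!dfs_to_exit(0)) printf("no path to exit found\n");
        return 0;
    }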


Building intelligence into a robot is a non-trivial exercise, but one worth the effort to demonstrate the
vast difference between a random wall bump-and-turn strategy and one that uses sensors to learn, memory to
store information, and logic to maneuver about a limited world. The different levels of learning, the need to
determine position from a random starting point, and the options for optimizing the exit solution provide the
designer with a large range of possible solutions.

VI. Conclusion

In this paper we presented the process of constructing a wall-follower robot as a senior design project
for EE undergraduate students. The task was shown to be suitable since it involves both hardware and software
design as well as implementation. We illustrated that, for successful completion of the project, the
participating students must design and implement a motor control scheme, an IR sensing system, and
navigational algorithms. In addition, the three systems must be incorporated into an overall control
architecture. As a result of conducting the project, we expect students to gain critical decision-making and
analysis skills. We plan to report detailed findings of our study at the upcoming conference.

REFERENCE

[1] Joseph L. Jones, “The Mobile Robot Assembly Guide,” A K Peters, Ltd., Wellesley, MA 02181.

AUTHORS

DANIEL PACK, Assistant Professor, US Air Force Academy. BSEE (1988) Arizona State University, MS
Engineering Sciences (1990) Harvard University, and Ph.D. EE (1995) Purdue University. He is a member
of Eta Kappa Nu (Electrical Engineering Honorary), Tau Beta Pi (Engineering Honorary), IEEE, and ASEE.
His research interests include intelligent control, robot vision, and walking robots.

SCOTT STEFANOV, Captain, USAF: BSEE (1985) Worcester Polytechnic Institute. MSEE (1991) Univ.
of Dayton. He was an avionics cockpit engineer at the USAF Wright Laboratories from ‘85-’90. He is an Air
Force Pilot flying the C-130E and T-3/A. Research areas include: Computer intelligence, Computer Graphics,
and Multiprocessor Architectures. He is an Electrical Engineering Instructor at the US Air Force Academy.

GEORGE YORK, Captain, USAF: BSEE (1986) US Air Force Academy. MSEE (1988) University of
Washington. He developed guidance computers for missiles at the USAF Wright Laboratories from ‘88-’92.
He then served two years as an exchange engineer at the Korean Agency for Defense Development.
Currently, he is teaching Microcomputer System Design courses at the US Air Force Academy.

PAM NEAL, Captain, USAF: BSEE (1986) Utah State University. MSEE (1992) University of
Washington. She was a test engineer for early GPS satellites from ‘86-’90. She is currently teaching
Microcomputer Programming and Computer Architecture courses at the US Air Force Academy.