
PROGRAMMING ROBOTS
Useful robot algorithms in both pseudocode and source code.

Because programming is a huge subject and countless books and tutorials on how to program have already been written, I will cover only what is specifically important to programming robots and not mentioned in common literature.

ATMEGA BOOTLOADER TUTORIAL

Before starting this tutorial, please read my bootloading tutorial (coming soon!) and my UART tutorial! You must also already have a UART connection set up on your robot for this to work: $50 Robot UART tutorial. As you will learn when I finish my other bootloading tutorial, a bootloader is software that can replace your hardware programmer. Instead of hooking up a programmer, you can program over a serial connection. You will need a programmer once to upload the bootloader, but you won't ever need it again, except perhaps for programming fuses or lock bits.

Now that you understand what a bootloader is and the benefits of one, I will demonstrate how to install a bootloader onto your $50 Robot or any other robot with an ATmega microcontroller. We will be using a bootloader that has an auto-baud feature, where your microcontroller will attempt to reconfigure its internal baud settings to ensure a proper bootloader connection. This does not mean your other hardware will auto-configure, so . . . Important: make sure that all of your external hardware is set to the exact same baud rate or this will not work!

Just for reference, the bootloader I selected is the open source fast tiny & mega UART bootloader. This bootloader is a bit buggy, comes with zero documentation, and has few comments in the source code . . . but it's the best I've found that can handle a wide range of ATmega types. I've made some small config changes to it for use on the $50 Robot, and those files will be downloadable later in this tutorial.

Configure BAUD (if you haven't already done so)
Click: Start->Settings->Control Panel->System. A new window called 'System Properties' will come up. Open the Hardware tab and click Device Manager. You should see this:

Go to Ports, select the one you are using, and right click it. Select Properties. A new window should come up; select the Port Settings tab:

Now configure the settings as you like, as described in the UART tutorial.

Upload Bootloader
You have two options here. You can use one of my precompiled bootloaders and upload it with AVR Studio:
ATmega8 Bootloader hex file
ATmega168 Bootloader hex file
Axon USB Bootloader hex file
beta v2.1 Bootloader files
Or you can custom modify the bootloader for your specific setup/ATmega. In the following steps I'll explain both.
Update: new bootloader software is available.

Programming your own Bootloader

Note: If you do not plan to modify the bootloader code, you may skip this step. Open up AVR Studio, and in the Project Wizard start a new project called bootloader:

Make sure you select 'Atmel AVR Assembler' since we will be programming in Assembly. Don't worry, it's mostly done for you already. You do not need to program in Assembly to use a bootloader, but the particular bootloader we are using is written in that language, and so we must compile it as such. Click Finish, and the new project should load up.

Install Files
Now, download this zip file and unzip it into your bootloader directory:
Bootloader Source Files (v1.9)
Bootloader Source Files (v2.1)
Note: this tutorial teaches only v1.9, but v2.1 is better.
You must also put your own robot program .hex into the bootloader folder. For example, suppose you just modified and compiled your own custom photovore code. Take that compiled .hex and place it into your bootloader folder. Don't forget to do this every time you change your custom code!

Optional: Compile Code
Note: If you do not plan to modify the bootloader code, you may skip this step.

Look for a file that matches the microcontroller you are using. For example, if you are using the ATmega168, look for the file M168.asm. Open that file up and copy/paste its contents into the bootloader.asm that is already open in AVR Studio. Now, looking at the datasheet of your microcontroller (pinout section), verify that the Tx and Rx pins are correct in bootloader.asm. This is an important step, and skipping it can in rare cases break something!!! Make any changes as needed. For example, this is what it should look like for both the ATmega8 and ATmega168:
.equ STX_PORT = PORTD
.equ STX_DDR  = DDRD
.equ STX      = PD1
.equ SRX_PIN  = PIND
.equ SRX_PORT = PORTD
.equ SRX      = PD0

Now compile it by pressing build:

Upload Code to ATmega
Now that you have your new custom bootloader .hex file, you simply need to upload it to your microcontroller. Use your hardware programmer like you have always done:

And finally, you need to program a fuse to tell it to use the bootloader. IMPORTANT: If you change the wrong fuse you can possibly destroy your ATmega! Don't change any other fuses unless you know what you are doing! You want to set BOOTRST to 0 by checking this box, and then pushing Program:

Your Bootloader is Uploaded and Ready!
Now disconnect your programmer cable. You won't be needing that again! You will need to power cycle your microcontroller (turn it off then on again) after uploading your bootloader for the settings to take effect.

Upload YOUR Program Through UART
Update 2010: A GUI version of the bootloader can be found on the Axon II setup tutorial.
Now open up a command prompt by going to start->Run...

and typing in 'cmd' and pushing ok:

A new command prompt should open up. Using the command 'cd', go into the directory of your bootloader files. See the below image for an example. With your robot turned off and UART ready to go, type in this command:
fboot17.exe -b38400 -c1 -pfile.hex -vfile.hex
38400 is your desired baud (9600, 38400, 115200, etc)
c1 is your com port (c1, c2, c3, etc)
'file' is the name of the program you want uploaded. The filename MUST be 8 characters or less or it will not work (a bug in the software), and the file must be located in the same folder as fboot.exe. For example, if photovore.hex was your file, do this: -pphotovore.hex -vphotovore.hex (yes, you need to give it twice, with p the first time and v the second time).
Press enter, and you will see a / symbol spinning. Turn on your robot, and it should now upload. This is what you should see upon a successful bootload:

For some unexplained reason, I occasionally get an error that says:
Bootloader VFFFFFFFF.FF Error, wrong device informations
If you get this error, just repeat this step again and it should work.
Note: after typing in a command once into the command prompt, you do not need to type it again. Just push the up arrow key to cycle through previously typed commands.

The Bootloader Didn't Work?!
What if I did the tutorial but it's still not working/connecting? Chances are you missed a step. Go back to the beginning and make sure you did everything correctly.
Try power cycling your microcontroller.
Make sure the hardware programmer is unplugged.
Make sure baud is configured properly on ALL of your hardware and ALL of your involved software.
Make sure no other device is trying to use the same com port at the same time, such as AVR Studio, HyperTerminal, etc.

Some mistakes you can make will cause your command prompt window to freeze up. Just open up a new window and try again. Some users have noticed that too many unused global variables in your source code will cause problems. See this forum post for more info. And a side note . . . this bootloader can only connect with com ports 1 to 4. For some odd reason the developer of the bootloader saw nothing wrong with this decision . . . If you need a different port, go to the com port settings and change the port you are using. Also note that some of your UART hardware might not be fast enough, as the software doesn't wait for hardware to keep up. The Easy Radio module will not work, for example. A direct serial/USB connection will work without a problem.

PROGRAMMING - COMPUTER VISION TUTORIAL

Introduction to Computer Vision
Computer vision is an immense subject, more than any single tutorial can cover. In the following tutorials I will cover the basics of computer vision in four parts, each focused on need-to-know practical knowledge.
Part 1: Vision in Biology
Part 1 will talk about vision in biology, such as the human eye, vision in insects, etc. By understanding how biology processes visual images, you may then be able to apply what you learned towards your own creations. This will help you turn the 'magic' into an understanding of how vision really works.
Part 2: Computer Image Processing
Part 2 will go into computer image processing. I will talk about how a camera captures an image, how it is stored in a computer, and how you can do basic alterations of an image. Basic machine vision tricks such as heuristics, thresholding, and greyscaling will be covered.

Part 3: Computer Vision Algorithms
Part 3 covers the typical computer vision algorithms, where I talk about how to do some higher level processing of what your robot sees. Edge detection, blob counting, middle mass, image correlation, facial recognition, and stereo vision will be covered.
Part 4: Computer Vision Algorithms for Motion
Part 4 covers computer vision algorithms for motion. Motion detection, tracking, optical flow, background subtraction, and feature tracking will be explained.
There is also a problem set to test you on what you have learned in this computer vision tutorial series.

PROGRAMMING - COMPUTER VISION TUTORIAL
Part 1: Vision in Biology

Vision in Biology
So why vision in biology? What does biology have to do with robots? Well, biomimetics is the study of biology to aid in the design of new technology - such as robots. The purpose of this tutorial is so that you can understand how biology approaches the vision problem. As we progress through parts 2, 3, and 4 you will start to draw parallels between how a robot can see the world and how you and I see the world. I will assume you have a basic understanding of biology, so I will try to build upon what you already know with a bottom-up approach, and hopefully not bore you with what you already know.
The Eye
The eye is stage one of the human vision system. Here is a diagram of the human eye:

Light first passes through the iris. The iris adjusts for the amount of light entering the eye - an auto-brightness adjuster. No matter how much light the eye sees, it tries to adjust so that it always gathers a set amount. Note that if the light is still too bright, you will feel naturally compelled to cover your eyes with your hands. Light then passes to the lens, which is stretched and compressed by muscles to focus the image. This is similar to auto-focus on a digital camera. Notice how the lens inverts the image upside-down? Having two eyes creates stereo vision, as they do not look along parallel straight lines. For example, look at your finger, then place your finger on your nose - see how you automatically become cross eyed? The angle of your eyes to each other generates ranging information which is then sent to your brain. Note: this however is not the only method the eyes use to generate range data.
Cones and Rods
The light then comes into contact with special neurons in the eye (cones for color and rods for brightness) that convert light energy to chemical energy. This process is complicated, but the end result is neurons that fire in special patterns that are sent to the brain by way of the optic nerve. Cones and rods are the biological versions of pixels. But unlike in a camera where each pixel is equal, this is not true for the human eye.

What the above chart shows is the number of rods and cones in the eye versus location in the eye. At the very center of the eye (fovea = 0) you will notice a huge number of cones, and zero rods. Further out from the center the number of cones sharply decreases, with a gradual increase in rods. What does this mean? It means only the center of your eye is capable of processing color - the density of information going to your brain is significantly higher there! Note the section labeled optic disk. This is where the optic nerve attaches to your eye, leaving no space left for light receptors. It is also called your blind spot.
Compound Eyes
Compound eyes work in the same way the human eye above works. But instead of rods and cones being the pixels, each individual facet of the compound eye acts as a pixel. Contrary to popular folklore, the insect doesn't actually see hundreds of images. Instead it is hundreds of pixels, combined.

A robot example of a compound eye would be taking a hundred photoresistors and combining them into a matrix to form a single greyscale image.

What advantage does a compound eye have over a human eye? If you poke out one of a human's eyes, his ability to see (total pixels gathered) drops to 50%. If you poke out one facet of an insect's eye, it will still have 99% visual capability. It can also simply regrow an eye.
Optic Nerve 'Image Processing'
Most people don't realize how jumbled the information from the human eye really is. The image is inverted by the lens, rods and cones are not equally distributed, and neither eye sees the exact same image! This is where the optic nerve comes into play. By reorganizing neurons physically, it can reassemble an image into something more useful.

Notice how the criss-crossing reorganizes the information from the eyes - that which is seen on the left is processed in the right brain, and that which is seen on the right is processed in the left brain. The problem of two eyes seeing two different images is partially solved. Also interesting to note, there are significantly fewer neurons in the optic nerve than there are cones and rods in the eye. The theory goes that there is summing and averaging of 'pixels' that are in close proximity in the eye. What happens after this is still unknown to science, but significant progress has been made.
Brain Processing
This is where your brain 'magically' assembles the image into something comprehensible. Although the details are fuzzy, it has been determined that different parts of your brain process different parts of the image. One part may process color, another may detect motion, and yet another may determine shape. This should give you clues to how to program such a system, in that everything can be treated as separate subsystems/algorithms.

And yet more Brain Processing . . .
All of the basic visual information is gathered, and then processed again at yet a higher level. This is where the brain asks, what is it that I really see? Again, science has not entirely solved this problem (yet), but we have really good theories on what probably happens. Supposedly the brain keeps a large database of reference information - such as what a mac-n-cheese dinner looks like. The brain 'observes' something, then goes through the reference library to make conclusions on what is observed.

How could this happen? Well, the brain knows the color should be orange, it knows it should have a shiny texture, and that the shape should be tube-like. Somehow the brain makes this connection, and tells you 'this is mac-n-cheese, yo.' Your other senses work in a similar manner. More specifically, the theory is about pattern recognition . . . it's sort of like me showing you an ink blot, then asking you 'what do you see?' Your brain will try and figure it out, despite the fact it doesn't actually represent anything. It's a subconscious effort.

Your brain also uses its understanding of the physical world (how things connect together in 3D space) to understand what it sees. Don't believe me? Then tell me how many legs this elephant has.

I highly recommend doing a google search on optical illusions. These are when the image processing rules of the brain 'break,' and they are often used by scientists to figure out how we understand what we see.
Stereo Image Processing
What has baffled scientists for the longest time, and has only recently been solved (in my opinion), is what allows us to see a 2D image and yet picture it in 3D. Look at a painting of a scene, and you can immediately determine a fairly accurate distance to every object in the picture. Scientists at CMU have recently solved how a computer can accomplish this. Basically the computer keeps a huge index of about 1000 or so images, each with range data assigned (trained) to it. Then by probability analysis, it can make connections with future images that need to be processed. Here are examples of figuring out 3D from 2D. ALL lines that are parallel in 3D converge in 2D. This is a picture of a train track. Notice how the parallel lines converge to a single point? This is a method the brain uses to guesstimate range data.

The brain uses the relation of objects located on the 2D ground to determine 3D scenes. Here is a picture of a forest. By looking at where the trees are located on the ground, you can quickly figure out how far away the trees are located from each other. What tree is closest to the photographer? Why? How do you program that as an algorithm?

If I removed the ground reference, what then would you rely on to figure out how far each tree is from each other? The next method would probably be size comparisons. You would assume trees that are located closer would appear larger.

But this wouldn't work if you had a giant tree far away and a tiny tree close up - as both would appear the same size! So the brain has many more methods, such as comparisons of details (size of leaves, for example), shading and shadows, etc. The below image is just a circle, but appears as a sphere because of shading. An algorithm that can process shading can convert 2D images to 3D.

Now that you understand the basics of biological vision processing in our Computer Vision Tutorial Series, you may continue on to Part 2: Computer Image Processing.

PROGRAMMING - COMPUTER VISION TUTORIAL
Part 2: Computer Image Processing
Pixels and Resolution
2D Matrices
Decreasing Resolution
Thresholding and Heuristics
Image Color Inversion
Image Brightness / Darkness
Addendum (1D -> 4D)

Computer Image Processing
In part 2 of the Computer Vision Tutorial Series we will talk about how images are stored in a computer, as well as basic image manipulation algorithms. Mona Lisa (original image above) will be our guiding example throughout this tutorial.
Image Collection
The very first step would be to capture an image. A camera captures data as a stream of information, reading from a single light receptor at a time and storing each complete 'scan' as one single file. Different cameras can work differently, so check the manual on how yours sends out image data. There are two main types of cameras, CCD and CMOS.

A CCD transports the charge across the chip and reads it at one corner of the array. An analog-to-digital converter (ADC) then turns each pixel's value into a digital value by measuring the amount of charge at each photosite and converting that measurement to binary form. CMOS devices use several transistors at each pixel to amplify and move the charge using more traditional wires. The CMOS signal is digital, so it needs no ADC. CCD sensors create high-quality, low-noise images. CMOS sensors are generally more susceptible to noise. Because each pixel on a CMOS sensor has several transistors located next to it, the light sensitivity of a CMOS chip is lower. Many of the photons hit the transistors instead of the photodiode. CMOS sensors traditionally consume little power. CCDs, on the other hand, use a process that consumes lots of power. CCDs consume as much as 100 times more power than an equivalent CMOS sensor. CCD sensors have been mass produced for a longer period of time, so they are more mature. They tend to have higher quality pixels, and more of them. Below is how colored pixels are arranged on a CCD chip:

When storing or processing an image, make sure the image is uncompressed - meaning don't use JPG's . . . BMP's, GIF's, and PNG's are often (although not always) uncompressed. If you decide to transmit an image as compressed data (for faster transmission speed), you will have to uncompress the image before processing. This is important to how the file is understood . . .

Pixels and Resolution
In every image you have pixels. These are the tiny little dots of color you see on your screen, and the smallest possible size any image can get. When an image is stored, the image file contains information on every single pixel in that image. This information includes two things: color and pixel location. Images also have a set number of pixels per size of the image, known as resolution. You might see terms such as dpi (dots per inch), meaning the number of pixels you will see along one inch of the image. A higher resolution means there are more pixels in a set area, resulting in a higher quality image. The disadvantage of higher resolution is that it requires more processing power to analyze an image. When programming computer vision into a robot, use low resolution.

The Matrix (the math kind)
Images are stored in 2D matrices, which represent the locations of all pixels. All images have an X component and a Y component. At each point, a color value is stored. If the image is black and white (binary), either a 1 or a 0 will be stored at each location. If the image is greyscale, it will store a range of values. If it is a color image (RGB), it will store sets of values. Obviously, the less color involved, the faster the image can be processed. For many applications, binary images can achieve most of what you want. Here is a matrix example of a binary image of a triangle:
0 0 0 1 0 0 0
0 0 1 0 1 0 0
0 1 0 0 0 1 0
1 1 1 1 1 1 1
0 0 0 0 0 0 0

It has a resolution of 7 x 5, with a single bit stored in each location. Memory required is therefore 7 x 5 x 1 = 35 bits. Here is a matrix example of a greyscale (8 bit) image of a triangle:
0   0   55  255 55  0   0
0   55  255 55  255 55  0
55  255 55  55  55  255 55
255 255 255 255 255 255 255
55  55  55  55  55  55  55
0   0   0   0   0   0   0

It has a resolution of 7 x 6, with 8 bits stored in each location. Memory required is therefore 7 x 6 x 8 = 336 bits. As you can see, increasing resolution and information per pixel can significantly slow down your image processing speed. After converting color data to generate greyscale, Mona Lisa looks like this:
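To make the storage format concrete, here is a minimal C sketch (variable names are my own, assuming an 8-bit greyscale image, one byte per pixel) that declares the 7 x 6 greyscale triangle above as a 2D array and computes its memory requirement:

#include <stdio.h>

#define WIDTH  7
#define HEIGHT 6

/* 8-bit greyscale triangle, one byte per pixel */
unsigned char image[HEIGHT][WIDTH] = {
    {  0,   0,  55, 255,  55,   0,   0},
    {  0,  55, 255,  55, 255,  55,   0},
    { 55, 255,  55,  55,  55, 255,  55},
    {255, 255, 255, 255, 255, 255, 255},
    { 55,  55,  55,  55,  55,  55,  55},
    {  0,   0,   0,   0,   0,   0,   0}
};

int main(void)
{
    /* memory = width * height * bits per pixel */
    int bits = WIDTH * HEIGHT * 8;
    printf("Resolution: %d x %d, memory: %d bits\n", WIDTH, HEIGHT, bits);
    /* read one pixel: row (y) index first, then column (x) */
    printf("Pixel at x=3, y=0 is %d\n", image[0][3]);
    return 0;
}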

Decreasing Resolution
The very first operation I will show you is how to decrease the resolution of an image. The basic concept in decreasing resolution is that you are selectively deleting data from the image. There are several ways you can do this. The first method is to just delete one pixel out of every group of pixels in both the X and Y directions of the matrix. For example, using our greyscale image of a triangle above, and deleting one out of every two pixels in the X direction, we would get:
0   55  55  0
0   255 255 0
55  55  55  55
255 255 255 255
55  55  55  55
0   0   0   0

and continuing with the Y direction:


0   55  55  0
55  55  55  55
55  55  55  55

This results in a 4 x 3 matrix, for a memory usage of 4 x 3 x 8 = 96 bits.

Another way of decreasing resolution would be to choose a pixel, average the values of all surrounding pixels, store that value in the chosen pixel location, then delete all the surrounding pixels. For example:
13  112 112 13
145 166 166 145
103 103 103 103

Using the latter method for resolution reduction, this is what Mona Lisa would look like (below). You can see how pixels are averaged along the edges of her hair.
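Here is a minimal C sketch of the averaging approach (function and variable names are my own, assuming an 8-bit greyscale image with even width and height): every 2 x 2 block of pixels is averaged into one output pixel, halving the resolution in both directions.

/* Downsample an 8-bit greyscale image by averaging each 2x2 block.
   src is w x h (w and h assumed even); dst must hold (w/2)*(h/2) bytes. */
void halve_resolution(const unsigned char *src, int w, int h, unsigned char *dst)
{
    int x, y;
    for (y = 0; y < h; y += 2) {
        for (x = 0; x < w; x += 2) {
            int sum = src[y * w + x] + src[y * w + x + 1]
                    + src[(y + 1) * w + x] + src[(y + 1) * w + x + 1];
            dst[(y / 2) * (w / 2) + (x / 2)] = (unsigned char)(sum / 4);
        }
    }
}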

Thresholding and Heuristics
While the above method reduces image file size by resolution reduction, thresholding reduces file size by reducing the color data in each pixel. To do this, you first need to analyze your image using a method called heuristics. Heuristics is when you statistically look at an image as a whole, such as determining the overall brightness of an image, or counting the total number of pixels that contain a certain color. For an example histogram, here is my sample greyscale pixel histogram of Mona Lisa, and sample histogram generation code. An example image heuristic plotting pixel count (Y-axis) versus pixel color intensity (0 to 255, X-axis):

Often heuristics is used for improving image contrast. The image is analyzed, and then bright pixels are made brighter and dark pixels are made darker. I'm not going to go into contrast details here as it is a little complicated, but this is what an improved contrast of Mona Lisa would look like (before and after):

In this particular thresholding example, we will convert all colors to binary. How do you decide which pixel is a 1 and which is a 0? The first thing you do is determine a threshold - all pixel values above the threshold become a 1, and all below become a 0. Your threshold can be chosen arbitrarily, or it can be based on your heuristic analysis. For example, converting our greyscale triangle to binary, using 40 as our threshold, we get:
0 0 1 1 1 0 0
0 1 1 1 1 1 0
1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1
0 0 0 0 0 0 0

If the threshold was 100, we would get this better image:


0 0 0 1 0 0 0
0 0 1 0 1 0 0
0 1 0 0 0 1 0
1 1 1 1 1 1 1
0 0 0 0 0 0 0
0 0 0 0 0 0 0

As you can see, setting a good threshold is very important. In the first example, you cannot see the triangle, yet in the second you can. Poor thresholds result in poor images. In the following example, I used heuristics to determine the average pixel value (add all pixels together, and then divide by the total number of pixels in the image). I then set this average as the threshold. Setting this threshold for Mona Lisa, we get this binary image:

Note that if the threshold was 1, the entire image would be black. If the threshold was 255, the entire image would be white. Thresholding really excels when the background colors are very different from the target colors, as this automatically removes the distracting background from your image. If your target is the color red, and there is little to no red in the background, your robot can easily locate any object that is red by simply thresholding the red value of the image.
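As a sketch of this heuristic-average thresholding in C (my own function names, assuming an 8-bit greyscale image stored one byte per pixel), here is roughly how the average pixel value could be computed and then used as the threshold:

/* Compute the average pixel value, then binarize: above average -> 1, else 0. */
void threshold_by_average(const unsigned char *grey, int num_pixels, unsigned char *binary)
{
    long sum = 0;
    int i;
    for (i = 0; i < num_pixels; i++)
        sum += grey[i];
    unsigned char threshold = (unsigned char)(sum / num_pixels);

    for (i = 0; i < num_pixels; i++)
        binary[i] = (grey[i] > threshold) ? 1 : 0;
}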

Image Color Inversion
Color image inversion is a simple equation that inverts the colors of the image. I haven't found any use for this on a robot, but it does however make a good example . . . The greyscale equation is simply:
new_pixel_value = 255 - pixel_value
The greyscale triangle then becomes:
255 255 200 0   200 255 255
255 200 0   200 0   200 255
200 0   200 200 200 0   200
0   0   0   0   0   0   0
200 200 200 200 200 200 200
255 255 255 255 255 255 255

An RGB inversion of Mona Lisa becomes:

Brightness (and Darkness)
Increasing brightness is another simple algorithm. All you do is add (or subtract) some arbitrary value to each pixel:
new_pixel_value = pixel_value + 10
You must also make sure that no pixel goes above the maximum value. With 8 bit greyscale, no value can exceed 255. A simple check can be added like this:
if (pixel_value + 10 > 255)
    { new_pixel_value = 255; }
else
    { new_pixel_value = pixel_value + 10; }
And for our lovely and now radiant Mona Lisa:

The problem with increasing brightness too much is that it will result in whiteout. For example, if your arbitrarily added value was 255, every pixel would be white. It also does not improve a robot's ability to understand an image, so you probably will not find a use for this algorithm directly.

Addendum: 1D, 2D, 3D, 4D
A 1D image can be obtained from use of a 1 pixel sensor, such as a photoresistor. As mentioned in part 1 of this vision tutorial, if you put several photoresistors together, you can generate an image matrix. You can also generate a 2D image matrix by scanning a 1 pixel sensor, such as with a scanning Sharp IR. If you use a ranging sensor, you can easily store 3D info into a much more easily processed 2D matrix. 4D images include time data. They are actually stored as a set of 2D matrix images, with each pixel containing range data, and a new 2D matrix being stored after every X seconds of time passing. This makes processing simple, as you can just analyze each 2D matrix separately, and then compare images to process change over time. This is just like the film of a movie, which is actually just a set of 2D images changing so fast it appears to be moving. This is also quite similar to how a human processes temporal information, as we see about 25 images per second - each processed individually. Actually, biologically, it's a bit more complicated than this. Feel free to read an email I received from Mr Bill concerning biological fps. But for all intents and purposes, 25fps is an appropriate benchmark.

Now that you understand the basics of computer image processing in our Computer Vision Tutorial Series, you may continue on to Part 3: Computer Vision Algorithms (coming soon!).

PROGRAMMING - COMPUTER VISION TUTORIAL
Part 3: Computer Vision Algorithms
Edge Detection
Shape Detection
Middle Mass and Blobs
Pixel Classification
Image Correlation
Facial Recognition
Stereo Vision

Now that you have learned about biological vision and computer image processing, we now continue on to the basic algorithms of computer vision.
Computer Vision vs Machine Vision
Computer vision and machine vision differ in how images are created and processed. Computer vision is done with everyday real world video and photography. Machine vision is done in oversimplified situations as to significantly increase reliability while decreasing cost of equipment and complexity of algorithms. As such, machine vision is used for robots in factories, while computer vision is more appropriate for robots that operate in human environments. Machine vision is more rudimentary yet more practical, while computer vision relates to AI. There is a lesson in this . . .

Edge Detection
Edge detection is a technique to locate the edges of objects in the scene. This can be useful for locating the horizon, the corner of an object, white line following, or for determining the shape of an object. The algorithm is quite simple:
sort through the image matrix pixel by pixel
for each pixel, analyze each of the 8 pixels surrounding it
record the value of the darkest pixel, and the lightest pixel
if ((lightest_pixel_value - darkest_pixel_value) > threshold)
    then rewrite that pixel as 1;
    else rewrite that pixel as 0;
What the algorithm does is detect sudden changes in color or lighting, representing the edge of an object. Check out the edges on Mona Lisa:
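A rough C sketch of that algorithm might look like this (assumptions: an 8-bit greyscale image stored one byte per pixel, my own names, and border pixels simply left as 0):

/* Edge detection by local contrast: for each pixel, find the darkest and
   lightest of its 8 neighbours; mark an edge (1) if the difference exceeds
   the threshold, otherwise 0. Border pixels are left as 0. */
void detect_edges(const unsigned char *grey, int w, int h,
                  unsigned char *edges, int threshold)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            edges[y * w + x] = 0;
            if (x == 0 || y == 0 || x == w - 1 || y == h - 1)
                continue;
            unsigned char darkest = 255, lightest = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0)
                        continue;
                    unsigned char p = grey[(y + dy) * w + (x + dx)];
                    if (p < darkest)  darkest = p;
                    if (p > lightest) lightest = p;
                }
            }
            if (lightest - darkest > threshold)
                edges[y * w + x] = 1;
        }
    }
}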

A challenge you may have is choosing a good threshold. The left image has a threshold that's too low, and the right image has a threshold that's too high. You will need to run an image heuristics program for it to work properly.

You can also do other neat tricks with images, such as thresholding only a particular color like red.

Shape Detection and Pattern Recognition
Shape detection requires preprogramming a mathematical representation database of the shapes you wish to detect. For example, suppose you are writing a program that can distinguish between a triangle, a square, and a circle. This is how you would do it:
run edge detection to find the border line of each shape
count the number of continuous edges
    a sharp change in line direction signifies a different line
    do this by determining the average vector between adjacent pixels
if three lines are detected, then it's a triangle
if four lines, then a square
if one line, then it's a circle
by measuring angles between lines you can determine more info (rhomboid, equilateral triangle, etc.)

The basic shapes are very easy, but as you get into more complex shapes (pattern recognition) you will have to use probability analysis. For example, suppose your algorithm needed to recognize 10 different fruits (only by shape) such as an apple, an orange, a pear, a cherry, etc. How would you do it? Well, all are circular, but none perfectly circular. And not all apples look the same, either. By using probability, you can run an analysis that says 'oh, this fruit fits 90% of the characteristics of an apple, but only 60% of the characteristics of an orange, so it's more likely an apple.' It's the computational version of an 'educated guess.' You could also say 'if this particular feature is present, then it has a 20% higher probability of being an apple.' The feature could be a stem such as on an apple, fuzziness like on a coconut, or spikes like on a pineapple, etc. This method is known as feature detection.

Middle Mass and Blob Detection
Blob detection is an algorithm used to determine if a group of connected pixels are related to each other. This is useful for identifying separate objects in a scene, or counting the number of objects in a scene. Blob detection would be useful for counting people in an airport lobby, or fish passing by a camera. Middle mass would be useful for a baseball catching robot, or a line following robot.

To find a blob, you threshold the image by a specific color as shown below. The blue dot represents the middle mass, or the average location of all pixels of the selected color.

If there is only one blob in a scene, the middle mass is always located in the center of an object. But what if there were two or more blobs? This is where it fails, as the middle mass is no longer located on any object:

To solve this problem, your algorithm needs to label each blob as a separate entity. To do this, run this algorithm:
go through each pixel in the array:
    if the pixel is a blob color, label it '1'
    otherwise label it '0'
go to the next pixel
    if it is also a blob color
        and if it is adjacent to blob 1, label it '1'
        else label it '2' (or more)
repeat until all pixels are done
What the algorithm does is label each blob with a number, counting up for every new blob it encounters. Then to find middle mass, you can just find it for each individual blob.
In this below video, I ran a few algorithms in tandem. First, I removed all non-red objects. Next, I blurred the video a bit to make blobs more connected. Then, using blob detection, I only kept the blob that had the most pixels (the largest red object). This removed background objects such as the fire extinguisher. Lastly, I did center of mass to track the actual location of the object. I also ran a population threshold algorithm that made the object edges really sharp. It doesn't improve the algorithm in this case, but it does make it look nicer as a video. Feel free to download my custom blob detection RoboRealm file that I used. In this video, I programmed my ERP to do nothing but middle mass tracking:
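Below is a simplified C sketch of the labelling idea described above (my own names; it assumes a binary image where 1 marks blob-colored pixels, and it only checks the left and upper neighbours, so a real implementation would also need a pass to merge labels that turn out to belong to the same blob):

/* One-pass blob labelling on a binary image (1 = blob pixel, 0 = background).
   labels[] gets 0 for background or a blob number starting at 1.
   Note: a second pass (or union-find) is needed to merge labels when a
   U-shaped blob gets two numbers; omitted here for clarity. */
int label_blobs(const unsigned char *binary, int w, int h, int *labels)
{
    int next_label = 0;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            labels[i] = 0;
            if (!binary[i])
                continue;
            int left = (x > 0) ? labels[i - 1] : 0;
            int up   = (y > 0) ? labels[i - w] : 0;
            if (left)      labels[i] = left;              /* adjacent to an existing blob */
            else if (up)   labels[i] = up;
            else           labels[i] = ++next_label;      /* start a new blob */
        }
    }
    return next_label;   /* number of blobs found (before merging) */
}

/* Middle mass of one blob: average x and y of all pixels with that label. */
void middle_mass(const int *labels, int w, int h, int blob, int *mx, int *my)
{
    long sx = 0, sy = 0, count = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (labels[y * w + x] == blob) {
                sx += x; sy += y; count++;
            }
    if (count) { *mx = (int)(sx / count); *my = (int)(sy / count); }
}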

Pixel Classification
Pixel classification is when you assign each pixel in an image to an object class. For example, all greenish pixels would be grass, all blueish pixels would be sky or water, all greyish pixels would be road, and all yellow would be a road lane divider. There are other ways to classify each pixel, but color is typically the easiest. This method is clearly useful for picking out the road for road following and obstacles for obstacle avoidance. It's also used in satellite image processing, such as this image of a city (yellow/red for buildings), forest (green), and river (blue):

If Greenpeace wanted to know how much forest has been cut down, a simple pixel density count can be done. To do this, simply count and compare the forest pixels from before and after the logging. A major benefit of this bottom-up method of image processing is its immunity to heavy image noise. Blobs do not need to be identified first. By finding the middle mass of these pixels, the center location of each object can be found. Need an algorithm to identify roads for your driving robot? This below video (from my house's front door) is an example of me simply maximizing RGB (red green blue) colors. Pixels that are more blue than any other color become all blue, pixels more green than any other color become all green, and the same for red. What you get is the road being all blue, the grass being all green, and the houses being red. It's not perfect, yet it still works amazingly well for a simple pixel classification algorithm. This algorithm would complement other algorithms well. Feel free to download my custom pixel classification RoboRealm file that I used:
<head><version>1.7.3.3</version></head>
<Read_AVI>
  <loop_playback>1</loop_playback>
  <filename>C:\Documents and Settings\Pika\Desktop\snowpics\MOV03312 mpg.avi</filename>
  <running>TRUE</running>
</Read_AVI>
<Scale>
  <maintain_aspect>1</maintain_aspect>
  <percent_height>62</percent_height>
  <percent_width>62</percent_width>
  <pixel_width>400</pixel_width>
  <pixel_height>300</pixel_height>
</Scale>
<Max_RGB_Channel/>
<RGB_Filter>
  <max_value>20</max_value>
  <min_value>113</min_value>
  <channel>4</channel>
</RGB_Filter>
<Write_AVI>
  <limit_time_type>-1</limit_time_type>
  <image_to_save>Current</image_to_save>
  <codec>Indeo video 5.10</codec>
  <filename>C:\Documents and Settings\Pika\Desktop\snowpics\RBGmaxinga.avi</filename>
  <real_time>1</real_time>
</Write_AVI>
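For reference, here is a small C sketch of that max-RGB-channel trick (my own names; it assumes the image is stored as interleaved 8-bit R, G, B bytes). Each pixel is snapped to pure red, green, or blue depending on which channel is strongest:

/* Classify each pixel by its dominant colour channel: the strongest of
   R, G, B is set to 255 and the other two to 0. Ties go to red, then green. */
void max_rgb_channel(unsigned char *rgb, int num_pixels)
{
    for (int i = 0; i < num_pixels; i++) {
        unsigned char *p = &rgb[i * 3];
        if (p[0] >= p[1] && p[0] >= p[2]) {        /* red dominates   */
            p[0] = 255; p[1] = 0; p[2] = 0;
        } else if (p[1] >= p[2]) {                 /* green dominates */
            p[0] = 0; p[1] = 255; p[2] = 0;
        } else {                                   /* blue dominates  */
            p[0] = 0; p[1] = 0; p[2] = 255;
        }
    }
}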

Image Correlation (Template Matching)
Image correlation is one of the many forms of template matching for simple object recognition. This method works by keeping a large database of various imaged features, and computing the 'intensity similarity' of an entire image or window with another. In this example, various features of an adorably cute squirrel (it's the species name) are obtained for comparison with other objects.
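One simple way to score 'intensity similarity' is the sum of absolute differences between a template and a window of the image; this is only one of many correlation measures. The sketch below (my own names, 8-bit greyscale assumed) slides the template over the image and returns the position where the difference is smallest:

#include <stdlib.h>   /* for abs() */

/* Find the (x, y) offset where an 8-bit greyscale template best matches the
   image, using the sum of absolute differences (lower = more similar). */
void match_template(const unsigned char *img, int iw, int ih,
                    const unsigned char *tpl, int tw, int th,
                    int *best_x, int *best_y)
{
    long best_score = -1;
    *best_x = 0;
    *best_y = 0;
    for (int y = 0; y + th <= ih; y++) {
        for (int x = 0; x + tw <= iw; x++) {
            long score = 0;
            for (int ty = 0; ty < th; ty++)
                for (int tx = 0; tx < tw; tx++)
                    score += abs((int)img[(y + ty) * iw + (x + tx)]
                               - (int)tpl[ty * tw + tx]);
            if (best_score < 0 || score < best_score) {
                best_score = score;
                *best_x = x;
                *best_y = y;
            }
        }
    }
}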

This method is also used for feature detection (mentioned earlier) and facial recognition . . .

Facial Recognition
Facial recognition is a more advanced type of pattern recognition. With shape recognition you only need a small database of mathematical representations of shapes. But while basic shapes like a triangle can be easily described, how do you mathematically represent a face?

Here is an exercise for you. Suppose you have a friend coming to your family's house and she/he wants to recognize every face by name before arriving. If you could only give a written list of facial features for each family member, what would you say about each face? You might describe hair color, length, or style. Maybe your sister has a beard. One person might have a more rounded face, while another person might have a very thin face. For a family of 4 people this exercise is really easy. But what if you had to do it for everyone in your class? You might also analyze skin tone, eye color, wrinkles, mouth size . . . the list goes on. As the number of people to be analyzed grows, so does the number of required descriptions for each face. One popular way of digitizing faces is to measure the distance between the eyes, the size of the head, the distance between the eyes and the mouth, and the length of the mouth. By keeping a database of these values, you can surprisingly accurately identify thousands of different faces. Hint: notice how the features on Mona Lisa's face above are much easier to identify and locate after edge detection.

Unfortunately for law enforcement this method does not work outside of the lab. This is because it requires facial images that are really close and clear for the measurements to be done accurately. It is also difficult to control which way a person is looking. For example, can you make out the facial measurements of the man in this security cam image?

Have a look at this below image. Despite these pictures also being tiny and blurry, you can somehow recognize many of them! The human brain obviously has other yet undiscovered methods of facial recognition . . .

Stereo Vision
Stereo vision is a method of determining the 3D location of objects in a scene by comparing images from two separate cameras. Now suppose you have some robot on Mars and he sees an alien (at point P(X,Y)) with two video cameras. Where does the robot need to drive to run over this alien (for 20 kill points)?

First let's analyze the robot camera itself. Although it is a simplification resulting in minor error, the pinhole camera model will be used in the following examples:

The image plane is where the photo-receptors are located in the camera, and the lens is the lens of the camera. The focal distance is the distance between the lens and the photo-receptors (it can be found in the camera datasheet). Point P is the location of the alien, and point p is where the alien appears on the photo-receptors. The optical axis is the direction the camera is pointing. Redrawing the diagram to make it mathematically simpler to understand, we get this new diagram

with the following equations for a single camera:
x_camL = focal_length * X_actual / Z_actual
y_camL = focal_length * Y_actual / Z_actual
CASE 1: Parallel Cameras
Now moving on to two parallel facing cameras (L for left camera and R for right camera), we have this diagram:

The Z-axis is the optical axis (the direction the cameras are pointing). b is the distance between cameras, while f is still the focal length. The equations of stereo triangulation (because it looks like a triangle) are:
Z_actual = (b * focal_length) / (x_camL - x_camR)
X_actual = x_camL * Z_actual / focal_length
Y_actual = y_camL * Z_actual / focal_length
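Those equations drop straight into code. Here is a minimal C sketch (my own names; b, the focal length, and the image-plane coordinates x_camL, x_camR, y_camL just need to share consistent units, and the example numbers in main() are made up purely to show usage):

#include <stdio.h>

/* Stereo triangulation for two parallel cameras.
   b = distance between cameras, f = focal length,
   (xL, yL) and xR = image-plane coordinates of the same point in each camera. */
void triangulate_parallel(double b, double f,
                          double xL, double yL, double xR,
                          double *X, double *Y, double *Z)
{
    double disparity = xL - xR;        /* larger disparity = closer object */
    *Z = (b * f) / disparity;
    *X = xL * (*Z) / f;
    *Y = yL * (*Z) / f;
}

int main(void)
{
    double X, Y, Z;
    /* hypothetical example numbers, just to show usage */
    triangulate_parallel(0.1, 0.006, 0.0021, 0.0009, 0.0015, &X, &Y, &Z);
    printf("Alien at X=%.2f Y=%.2f Z=%.2f (same units as b)\n", X, Y, Z);
    return 0;
}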

CASE 2a: Non-Parallel Cameras, Rotation About Y-axis
And lastly, what if the cameras are pointing in different, non-parallel directions? In this below diagram, the Z-axis is the optical axis for the left camera, while the Zo-axis is the optical axis of the right camera. Both cameras lie on the XZ plane, but the right camera is rotated by some angle phi. The point where both optical axes (plural of axis, pronounced ACKS-eye) intersect, at (0,0,Zo), is called the fixation point. Note that the fixation point could also be behind the cameras when Zo < 0.

Calculating for the alien location . . .
Zo = b / tan(phi)
Z_actual = (b * focal_length) / (x_camL - x_camR + focal_length * b / Zo)
X_actual = x_camL * Z_actual / focal_length
Y_actual = y_camL * Z_actual / focal_length
CASE 2b: Non-Parallel Cameras, Rotation About X-axis
Calculating for the alien location . . .
Z_actual = (b * focal_length) / (x1 - x2)
X_actual = x_camL * Z_actual / focal_length
Y_actual = y_camL * Z_actual / focal_length + tan(phi) * Z
CASE 2c: Non-Parallel Cameras, Rotation About Z-axis
For simplicity, rotation around the optical axis is usually dealt with by rotating the image before applying matching and triangulation. Given the translation vector T and rotation matrix R describing the transformation from left camera to right camera coordinates, the equation to solve for stereo triangulation is:
p' = R^T (p - T)

where p and p' are the coordinates of P in the left and right camera coordinates respectively, and R^T is the transpose (which for a rotation matrix is also the inverse) of R. Please continue on in the Computer Vision Tutorial Series for Part 4: Computer Vision Algorithms for Motion.

PROGRAMMING - COMPUTER VISION TUTORIAL
Part 4: Computer Vision Algorithms for Motion
Motion Detection
Tracking
Optical Flow
Background Subtraction
Feature Tracking
Practice Problems
Download Software

In part 4 of the Computer Vision Tutorial Series we will continue with computer vision algorithms for motion.

Motion Detection (Bulk Motion)
Motion detection works on the basis of frame differencing - meaning comparing how pixels (usually blobs) change location after each frame. There are two ways you can do motion detection. The first method just looks for a bulk change in the image:
calculate the average of a selected color in frame 1
wait X seconds
calculate the average of a selected color in frame 2
if (abs(avg_frame_1 - avg_frame_2) > threshold) then motion detected
The other method looks at the motion of the middle mass:
calculate the middle mass in frame 1
wait X seconds
calculate the middle mass in frame 2
if (abs(mm_frame_1 - mm_frame_2) > threshold) then motion detected
The problem with these motion detection methods is that neither detects very slow moving objects, as determined by the sensitivity of the threshold. But if the threshold is too sensitive, it will detect things like shadows and changes in sunlight! The algorithm also can't handle a rotating object - an object that moves, but whose middle mass does not change location.
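As a sketch of the first (bulk change) method in C (my own names, assuming two 8-bit greyscale frames captured X seconds apart; for a color image you would average just the selected channel instead):

/* Bulk motion detection: compare the average brightness of two frames
   taken X seconds apart. Returns 1 if motion was detected, 0 otherwise. */
int bulk_motion_detected(const unsigned char *frame1,
                         const unsigned char *frame2,
                         int num_pixels, int threshold)
{
    long sum1 = 0, sum2 = 0;
    for (int i = 0; i < num_pixels; i++) {
        sum1 += frame1[i];
        sum2 += frame2[i];
    }
    long avg1 = sum1 / num_pixels;
    long avg2 = sum2 / num_pixels;
    long diff = avg1 - avg2;
    if (diff < 0)
        diff = -diff;
    return diff > threshold;    /* too sensitive a threshold catches shadows! */
}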

Tracking
By doing motion detection with the middle mass method, you can run more advanced algorithms such as tracking. By doing vector math, and knowing the pixel-to-distance ratio, one may calculate the displacement, velocity, and acceleration of a moving blob.

Here is an example of how to calculate the speed of a car:
calculate the middle mass in frame 1
wait X seconds
calculate the middle mass in frame 2
speed = (mm_frame_2 - mm_frame_1) * distance_per_pixel / X
Problems with tracking:

The major issue with this algorithm is determining the distance-to-pixel ratio. If your camera is at an angle to the horizon (not looking overhead and pointing straight down), or your camera experiences the lens effect (all cameras do, to some extent), then you need to write a separate algorithm that maps this ratio for a given pixel located at an X and Y position. The below image is an exaggerated lens effect, with pixels further down the trail equaling a greater distance than the pixels closer to the camera.

This Mars Rover camera image is a good example of the lens effect:

Lens radial distortion can be modelled by the following equations:
x_actual = xd * (1 + distortion_constant * (xd^2 + yd^2))
y_actual = yd * (1 + distortion_constant * (xd^2 + yd^2))
The variables xd and yd are the image coordinates of the distorted image. The distortion_constant is a constant depending on the distortion of the lens. This constant can either be determined experimentally, or from the data sheets of the lens or camera.
Cross over is the other major problem. This is when multiple objects cross over each other (i.e. one blob passes behind another blob) and the algorithm gets confused which blob is which. For an example, here is a video showing the problem. Notice how the algorithm gets confused as the man goes behind the tree, or crosses over another tracked object? The algorithm must remember a decent number of features of each tracked object for crossovers to work. (video was taken from here)

Optical Flow
This computer vision method makes no attempt to identify the observed objects. It works by analyzing the bulk/individual motion of pixels. It is useful for tracking, 3D analysis, altitude measurement, and velocity measurement. This method has the advantage that it can work with low resolution cameras, while the simpler algorithms require minimal processing power. Optical flow is a vector field that shows the direction and magnitude of these intensity changes from one image to the other, as shown here:

Applications for Optical Flow
Altitude Measurement (for constant speed)
Ever notice when traveling by plane, the higher you are the slower the ground below you seems to move? For aerial robots that have a known constant speed, the altitude can be calculated by analyzing pixel velocity from a downward facing camera. The slower the pixels travel, the higher the robot. A potential problem however is when your robot rotates in the air, but this can be accounted for by adding additional sensors like gyros and accelerometers.

Velocity Measurement (for constant altitude)
For a robot that is traveling at some known altitude, the robot velocity can be calculated by analyzing pixel velocity. This is the converse of the altitude measurement method. It is impossible to gather both altitude and velocity data simultaneously using only optical flow, so a second sensor (such as GPS or an altimeter) needs to be used. If however your robot is an RC car, the altitude is already known (probably an inch above the ground). Velocity can then be calculated using optical flow with no other sensors. Optical flow can be used to directly compute time to impact for missiles. Optical flow is also a technique often used by insects to gauge flight speed and direction.
Tracking
Please see tracking above, and background subtraction below. The optical flow method of tracking combines both of those methods together. By removing the background, all that needs to be done is analyze the motion of the moving pixels.
3D Scene Analysis
By analyzing the motion of all pixels, it is possible to generate rough 3D measurements of the observed scene. For example, in the below image of the subway train, the pixels on the far left are moving fast, and they are both converging and slowing down towards the center of the image. With this information, 3D information of the train can be calculated (including the velocity of the train, and the angle of the track).

Problems with optical flow . . . Generally, optical flow corresponds to the motion field, but not always. For example, the motion field and optical flow of a rotating barber's pole are different:

Although it is only rotating about the z-axis, optical flow will say the red bars are moving upwards in the z-axis. Obviously, assumptions need to be made of the expected observed objects for this to work properly. Accounting for multiple objects gets really complicated . . . especially if they cross each other . . . And lastly, the equations get yet more complicated when you track not just linear motion of pixels, but rotational motion as well. With optical flow, how do you tell if the center point of this ferris wheel is connected to the outer half?

Background Subtraction
Background subtraction is the method of removing pixels that do not move, focusing only on objects that do. The method works like this:
capture two frames
compare the pixel colors in each frame
if the colors are the same, replace with the color white
else, keep the new pixel
Here is an example of a guy moving with a static background. Some pixels did not appear to change when he moved, resulting in error:
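A bare-bones C sketch of this (my own names, using greyscale instead of color for brevity, with a small tolerance so that sensor noise does not count as change):

/* Background subtraction: pixels that did not change between frames are
   replaced with white (255); pixels that did change keep their new value.
   tolerance absorbs small sensor noise. */
void subtract_background(const unsigned char *prev, const unsigned char *curr,
                         unsigned char *out, int num_pixels, int tolerance)
{
    for (int i = 0; i < num_pixels; i++) {
        int diff = (int)curr[i] - (int)prev[i];
        if (diff < 0)
            diff = -diff;
        out[i] = (diff <= tolerance) ? 255 : curr[i];
    }
}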

The problem with this method as above is that if the object stops moving, then it becomes invisible. If my hand moves, but my body doesn't, all you see is a moving hand. There is also the chance that although something is moving, not all the individual pixels change color because the object is of a uniform color. To correct for this, this algorithm must be combined with other algorithms such as edge detection and blob finding, to make sure all pixels within a moving boundary aren't discarded. There is one other form of background subtraction called blue-screening (or green-screening, or chroma-key). What you do is physically replace the background with a solid color - a big green curtain (called a chroma-key) typically works best. Then the computer replaces all pixels of that color with pixels from another scene. This technique is commonly used for weather anchor people, and is why they never wear green ties =P

This blue-screening method is more a machine vision technique, as it will not work in everyday situations - only in studios with expert lighting.

Here is a video of my ERP that I made using chroma key. If you look carefully, you'll see various chroma key artifacts as I didn't put much effort into getting it perfect. I used Sony Vegas Movie Studio to make the video.

Feature Tracking
A feature is a specific identified point in the image that a tracking algorithm can lock onto and follow through multiple frames. Often features are selected because they are bright/dark spots, edges, or corners - depending on the particular tracking algorithm. Template matching is also quite common. What is important is that each feature represents a specific point on the surface of a real object. As a feature is tracked it becomes a series of two-dimensional coordinates that represent the position of the feature across a series of frames. This series is referred to as a track. Once tracks have been created they can be used immediately for 2D motion tracking, or then be used to calculate 3D information.

(for a realplayer streaming video example of feature tracking, click the image)

Visual Servoing
Visual servoing is a method of using video data to determine position data for your robot. For example, your robot sees a door and wants to go through it. Visual servoing will allow the front of your robot to align itself with the door and pass through. If your robot wanted to pick something up, it could use visual servoing to move the arm to that location. To drive down a road, visual servoing would track the road with respect to the robot's heading.

To do visual servoing, first you need to use the vision processing methods listed in this tutorial to locate the object. Then your robot needs to decide how to orient itself to reach that location using some type of PID loop - the error being the distance between where the robot wants to be, and where it sees it is. If you would like to learn more about robot arms for use in visual servoing, see my robot arms tutorial.

ROBOT ARM TUTORIAL
Degrees of Freedom
Robot Workspace
Mobile Manipulators
Force Calculations
Forward Kinematics
Inverse Kinematics
Motion Planning
Velocity Sensing
End Effector Design

About this Robot Arm Tutorial
The robot arm is probably the most mathematically complex robot you could ever build. As such, this tutorial can't tell you everything you need to know. Instead, I will cut to the chase and talk about the bare minimum you need to know to build an effective robot arm. Enjoy! To get you started, here is a video of a robot arm assignment I had when I took Robotic Manipulation back in college. My group programmed it to type the current time into the keyboard . . . (lesson learned: don't crash robot arms into your keyboard at full speed while testing in front of your professor) You might also be interested in a robot arm I built that can shuffle, cut, and deal playing cards.

Degrees of Freedom (DOF)
The degrees of freedom, or DOF, is a very important term to understand. Each degree of freedom is a joint on the arm, a place where it can bend or rotate or translate. You can typically identify the number of degrees of freedom by the number of actuators on the robot arm. Now this is very important - when building a robot arm you want as few degrees of freedom as your application allows!!! Why? Because each degree requires a motor, often an encoder, and exponentially more complicated algorithms and cost.
Denavit-Hartenberg (DH) Convention
The Robot Arm Free Body Diagram (FBD)
The Denavit-Hartenberg (DH) Convention is the accepted method of drawing robot arms in FBD's. There are only two motions a joint could make: translate and rotate. There are only three axes this could happen on: x, y, and z (out of plane). Below I will show a few robot arms, and then draw a FBD next to each, to demonstrate the DOF relationships and symbols. Note that I did not count the DOF on the gripper (otherwise known as the end effector). The gripper is often complex with multiple DOF, so for simplicity it is treated as separate in basic robot arm design.
4 DOF Robot Arm, three are out of plane:

3 DOF Robot Arm, with a translation joint:

5 DOF Robot Arm:

Notice between each DOF there is a linkage of some particular length. Sometimes a joint can have multiple DOF in the same location. An example would be the human shoulder. The shoulder actually has three coincident DOF. If you were to mathematically represent this, you would just say link length = 0.

Also note that a DOF has its limitations, known as the configuration space. Not all joints can swivel 360 degrees! A joint has some max angle restriction. For example, no human joint can rotate more than about 200 degrees. Limitations could be from wire wrapping, actuator capabilities, servo max angle, etc. It is a good idea to label each link length and joint max angle on the FBD.

(image credit: Roble.info)

Your robot arm can also be on a mobile base, adding additional DOF. If the wheeled robot can rotate, that is a rotation joint; if it can move forward, then that is a translational joint. This mobile manipulator robot is an example of a 1 DOF arm on a 2 DOF robot (3 DOF total).

Robot Workspace
The robot workspace (sometimes known as reachable space) is all the places that the end effector (gripper) can reach. The workspace is dependent on the DOF angle/translation limitations, the arm link lengths, the angle at which something must be picked up, etc. The workspace is highly dependent on the robot configuration. Since there are many possible configurations for your robot arm, from now on we will only talk about the one shown below. I chose this 3 DOF configuration because it is simple, yet isn't limiting in ability.

Now let's assume that all joints rotate a maximum of 180 degrees, because most servo motors cannot exceed that amount. To determine the workspace, trace all locations that the end effector can reach, as in the image below.

Now rotating that by the base joint another 180 degrees to get 3D, we have this workspace image. Remember that because it uses servos, all joints are limited to a max of 180 degrees. This creates a workspace of a shelled semi-sphere (it's a shape because I said so).

If you change the link lengths you can get very different sizes of workspaces, but this would be the general shape. Any location outside of this space is a location the arm can't reach. If there are objects in the way of the arm, the workspace can get even more complicated. Here are a few more robot workspace examples:

Cartesian Gantry Robot Arm

Cylindrical Robot Arm

Spherical Robot Arm

Scara Robot Arm

Articulated Robot Arm

Mobile Manipulators A moving robot with a robot arm is a sub-class of robotic arms. They work just like other robotic arms, but the DOF of the vehicle is added to the DOF of the arm. If, say, you have a differential drive robot (2 DOF) with a robot arm (5 DOF) attached (see yellow robot below), that would give the robot arm a total of 7 DOF. What do you think the workspace on this type of robot would be?

Force Calculations of Joints This is where this tutorial starts getting heavy with math. Before even continuing, I strongly recommend you read the mechanical engineering tutorials for statics and dynamics. This will give you a fundamental understanding of moment arm calculations. The point of doing force calculations is for motor selection. You must make sure that the motor you choose can not only support the weight of the robot arm, but also what the robot arm will carry (the blue ball in the image below). The first step is to label your FBD, with the robot arm stretched out to its maximum length.

Choose these parameters:

- weight of each linkage
- weight of each joint
- weight of object to lift
- length of each linkage

Next you do a moment arm calculation, multiplying downward force times the linkage lengths. This calculation must be done for each lifting actuator. This particular design has just two DOF that require lifting, and the center of mass of each linkage is assumed to be at Length/2.

Torque About Joint 1:
M1 = L1/2 * W1 + L1 * W4 + (L1 + L2/2) * W2 + (L1 + L3) * W3

Torque About Joint 2:
M2 = L2/2 * W2 + L3 * W3

As you can see, the math gets more complicated (and the joint weights get heavier) with each DOF you add. You will also see that shorter arm lengths allow for smaller torque requirements.

Too lazy to calculate forces and torques yourself? Try my robot arm calculator to do the math for you.
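To make the moment arm calculation concrete, here is a minimal sketch in C using the same L1/L2/L3 and W1-W4 symbols as the FBD above; the numeric values are placeholders you would replace with your own arm's measurements.

#include <stdio.h>

int main(void)
{
    /* link lengths in meters (placeholder values) */
    double L1 = 0.20, L2 = 0.15, L3 = 0.10;
    /* weights in newtons: linkages W1/W2, object W3, joint W4 (placeholder values) */
    double W1 = 2.0, W2 = 1.5, W3 = 1.0, W4 = 0.8;

    /* same equations as in the text; center of mass of each link assumed at Length/2 */
    double M1 = L1/2.0 * W1 + L1 * W4 + (L1 + L2/2.0) * W2 + (L1 + L3) * W3;
    double M2 = L2/2.0 * W2 + L3 * W3;

    printf("Torque about joint 1: %.3f N*m\n", M1);
    printf("Torque about joint 2: %.3f N*m\n", M2);
    return 0;
}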

Forward Kinematics Forward kinematics is the method for determining the orientation and position of the end effector, given the joint angles and link lengths of the robot arm. To calculate forward kinematics, all you need is high-school trig and algebra.

For our robot arm example, here we calculate end effector location with given joint angles and link lengths. To make visualization easier for you, I drew blue triangles and labeled the angles.

Assume that the base is located at x=0 and y=0. The first step is to locate x and y of each joint.

Joint 0 (with x and y at the base equaling 0):
x0 = 0
y0 = L0

Joint 1 (with x and y at J1 equaling 0):
cos(psi) = x1/L1 => x1 = L1*cos(psi)
sin(psi) = y1/L1 => y1 = L1*sin(psi)

Joint 2 (with x and y at J2 equaling 0):
sin(theta) = x2/L2 => x2 = L2*sin(theta)
cos(theta) = y2/L2 => y2 = L2*cos(theta)

End Effector Location (make sure your signs are correct):
x = x0 + x1 + x2 = 0 + L1*cos(psi) + L2*sin(theta)
y = y0 + y1 + y2 = L0 + L1*sin(psi) + L2*cos(theta)
z = alpha, in cylindrical coordinates

The angle of the end effector, in this example, is equal to theta + psi. Too lazy to calculate forward kinematics yourself? Check out my Robot Arm Designer v1 in excel.
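Below is a small sketch of those forward kinematics equations in C; the link lengths and joint angles are placeholder values, and the angle conventions follow the figure above.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double L0 = 0.05, L1 = 0.20, L2 = 0.15;   /* placeholder link lengths (m) */
    double psi = 0.6, theta = 0.4;            /* placeholder joint angles (radians) */

    /* same equations as above */
    double x = L1 * cos(psi) + L2 * sin(theta);
    double y = L0 + L1 * sin(psi) + L2 * cos(theta);
    double end_angle = theta + psi;           /* end effector angle */

    printf("end effector at x=%.3f, y=%.3f, angle=%.3f rad\n", x, y, end_angle);
    return 0;
}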

Inverse Kinematics Inverse kinematics is the opposite of forward kinematics. This is when you have a desired end effector position, but need to know the joint angles required to achieve it. The robot sees a kitten and wants to grab it - what angles should each joint go to? Although far more useful than forward kinematics, this calculation is also much more complicated. As such, I will not show you how to derive the equation based on your robot arm configuration. Instead, I will just give you the equations for our specific robot design:

psi = arccos((x^2 + y^2 - L1^2 - L2^2) / (2 * L1 * L2))
theta = arcsin((y * (L1 + L2 * c2) - x * L2 * s2) / (x^2 + y^2))
where c2 = (x^2 + y^2 - L1^2 - L2^2) / (2 * L1 * L2)
and s2 = sqrt(1 - c2^2)

So what makes inverse kinematics so hard? Well, other than the fact that it involves nonlinear simultaneous equations, there are other reasons too. First, there is the very likely possibility of multiple, sometimes infinite, solutions (as shown below). How would your arm choose which is optimal, based on torques, previous arm position, gripping angle, etc.?

There is the possibility of zero solutions. Maybe the location is outside the workspace, or maybe the point within the workspace must be gripped at an impossible angle. Singularities, points of infinite required acceleration, can blow up equations and/or leave motors lagging behind (motors can't achieve infinite acceleration). And lastly, exponential equations take forever to calculate on a microcontroller. No point in having advanced equations on a processor that can't keep up. Too lazy to calculate inverse kinematics yourself? Check out my Robot Arm Designer v1 in excel.
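Here is a short sketch of those inverse kinematics equations in C. It follows the equations above directly and simply bails out when the target is outside the workspace; it does not try to resolve the multiple-solution and singularity issues just discussed.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double L1 = 0.20, L2 = 0.15;      /* placeholder link lengths (m) */
    double x = 0.25, y = 0.10;        /* desired end effector position */

    double c2 = (x*x + y*y - L1*L1 - L2*L2) / (2.0 * L1 * L2);
    if (c2 < -1.0 || c2 > 1.0) {
        printf("target is outside the workspace\n");
        return 1;
    }
    double s2 = sqrt(1.0 - c2*c2);

    double psi   = acos(c2);
    double theta = asin((y * (L1 + L2 * c2) - x * L2 * s2) / (x*x + y*y));

    printf("psi = %.3f rad, theta = %.3f rad\n", psi, theta);
    return 0;
}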

Motion Planning Motion planning on a robot arm is fairly complex so I will just give you the basics.

Suppose your robot arm has objects within its workspace: how does the arm move through the workspace to reach a certain point? To do this, assume your robot arm is just a simple mobile robot navigating in 3D space. The end effector will traverse the space just like a mobile robot, except now it must also make sure the other joints and links do not collide with anything. This is extremely difficult to do . . . What if you want your robot end effector to draw straight lines with a pencil? Getting it to go from point A to point B in a straight line is relatively simple to solve. What your robot should do, by using inverse kinematics, is go to many points between point A and point B (see the sketch below). The final motion will come out as a smooth straight line. You can do this method not only with straight lines, but with curved ones too. On expensive professional robotic arms all you need to do is program two points, and tell the robot how to go between the two points (straight line, fast as possible, etc.). For further reading, you could use the wavefront algorithm to plan this two point trajectory.
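As a rough sketch of the "many points between A and B" idea, the code below interpolates waypoints along the line and runs the inverse kinematics from earlier on each one; set_joint_angles() is a hypothetical stand-in for whatever servo/motor commands your own robot uses.

#include <math.h>

/* hypothetical stand-in: command your servos/motors to these angles */
static void set_joint_angles(double psi, double theta) { (void)psi; (void)theta; }

/* the inverse kinematics equations from this tutorial */
static void inverse_kinematics(double x, double y, double L1, double L2,
                               double *psi, double *theta)
{
    double c2 = (x*x + y*y - L1*L1 - L2*L2) / (2.0 * L1 * L2);
    double s2 = sqrt(1.0 - c2*c2);
    *psi   = acos(c2);
    *theta = asin((y * (L1 + L2*c2) - x * L2 * s2) / (x*x + y*y));
}

void move_straight_line(double ax, double ay, double bx, double by,
                        double L1, double L2, int steps)
{
    for (int i = 0; i <= steps; i++) {
        double t = (double)i / steps;        /* 0.0 at point A, 1.0 at point B */
        double x = ax + t * (bx - ax);       /* next waypoint along the line */
        double y = ay + t * (by - ay);
        double psi, theta;
        inverse_kinematics(x, y, L1, L2, &psi, &theta);
        set_joint_angles(psi, theta);        /* take one small step toward B */
    }
}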

Velocity (and more Motion Planning) Calculating end effector velocity is mathematically complex, so I will go only into the basics. The simplest way to do it is to assume your robot arm (held straight out) is a spoke of a rotating wheel of radius L (the arm length). If the joint rotates at Y rpm, then the velocity is:

velocity of end effector on a straight arm = 2 * pi * L * Y (distance per minute)

However the end effector does not just rotate about the base, but can go in many directions. The end effector can follow a straight line, or a curve, etc. With robot arms, the quickest way between two points is often not a straight line. If two joints have two different motors, or carry different loads, then max velocity can vary between them. When you tell the end effector to go from one point to the next, you have

two decisions: have it follow a straight line between both points, or tell all the joints to go as fast as possible - leaving the end effector to possibly swing wildly between those points. In the image below the end effector of the robot arm is moving from the blue point to the red point. In the top example, the end effector travels a straight line. This is the only possible motion this arm can perform to travel a straight line. In the bottom example, the arm is told to get to the red point as fast as possible. Given many different trajectories, the arm takes the one that allows the joints to rotate the fastest.

Which method is better? There are many deciding factors. Usually you want straight lines when the object the arm moves is really heavy, as a larger momentum change is required to move it (momentum = mass * velocity). But for maximum speed (perhaps the arm isn't carrying anything, or just light objects) you would want maximum joint speeds. Now suppose you want your robot arm to operate at a certain rotational velocity - how much torque would a joint need? First, let's go back to our FBD:

Now let's suppose you want joint J0 to rotate 180 degrees in under 2 seconds - what torque does the J0 motor need? Well, J0 is not affected by gravity, so all we need to consider is momentum and inertia. Putting this in equation form we get this:

torque = moment_of_inertia * angular_acceleration

Breaking that equation into sub-components we get:

torque = (mass * distance^2) * (change_in_angular_velocity / change_in_time)

where:

change_in_angular_velocity = (angular_velocity1) - (angular_velocity0)
angular_velocity = change_in_angle / change_in_time

Now assuming that at start time 0 the angular_velocity0 is zero, we get:

torque = (mass * distance^2) * (angular_velocity / change_in_time)

where distance is defined as the distance from the rotation axis to the center of mass of the arm:

center of mass of the arm = distance = 1/2 * (arm_length) (use arm mass)

But you also need to account for the object your arm holds:

center of mass of the object = distance = arm_length (use object mass)

So calculate the torque for the arm and then again for the object, then add the two torques together for the total:

torque(of_object) + torque(of_arm) = torque(for_motor)

And of course, if J0 was additionally affected by gravity, add the torque required to lift the arm to the torque required to reach the velocity you need. To avoid doing this by hand, just use the robot arm calculator. But it gets harder . . . the above equation is for rotational motion and not for straight line motions. Look up something called a Jacobian if you enjoy mathematical pain =P
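Here is that J0 example worked as a small C sketch, using the simplified torque equation above; the masses and arm length are placeholder values you would replace with your own.

#include <stdio.h>
#define PI 3.14159265358979

int main(void)
{
    double arm_mass    = 0.50;   /* kg (placeholder) */
    double object_mass = 0.20;   /* kg (placeholder) */
    double arm_length  = 0.35;   /* m  (placeholder) */
    double angle       = PI;     /* 180 degrees, in radians */
    double move_time   = 2.0;    /* seconds to complete the rotation */

    double angular_velocity = angle / move_time;            /* rad/s */
    double angular_accel    = angular_velocity / move_time; /* rad/s^2, starting from rest */

    /* arm: center of mass at half the arm length; object: at the full arm length */
    double torque_arm    = arm_mass    * (arm_length/2.0) * (arm_length/2.0) * angular_accel;
    double torque_object = object_mass *  arm_length      *  arm_length      * angular_accel;

    printf("motor torque needed at J0: %.4f N*m\n", torque_arm + torque_object);
    return 0;
}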

Another Video! In order to better understand robot arm dynamics, we had a robot arm bowling competition using the same DENSO 6DOF robot arms as in the clocks video. Each team programs an arm to do two tasks:

- Try to place all three of its pegs in the opponents' goal
- Block opponent pegs from going in your own goal

Enjoy! (notice the different arm trajectories) Arm Sagging Arm sagging is a common affliction of badly designed robot arms. This is when an arm is too long and heavy, bending when outwardly stretched. When designing your arm, make sure the arm is reinforced and lightweight. Do a finite element analysis to determine bending deflection/stress such as I did on my ERP robot:

Keep the heaviest components, such as motors, as close to the robot arm base as possible. It might be a good idea for the middle arm joint to be chain/belt driven by a motor located at the base (to keep the heavy motor on the base and off the arm). The sagging problem is even worse when the arm wobbles between stop-start motions. To solve this, implement a PID controller so as to slow the arm down before it makes a full stop.

Sensing

Most robot arms only have internal sensors, such as encoders. But for good reasons you may want to add additional sensors, such as video, touch, haptic, etc. A robot arm without video sensing is like an artist painting with his eyes closed. Using basic visual feedback algorithms, a robot arm could go from point to point on its own without a list of preprogrammed positions. Giving the arm a red ball, it could actually reach for it (visual tracking and servoing). If the arm can locate a position in X-Y space of an image, it could then direct the end effector to go to that same X-Y location (by using inverse kinematics). If you are interested in learning more about the vision aspect of visual servoing, please read the Computer Vision Tutorials for more information.

Haptic sensing is a little different in that there is a human in the loop. The human controls the robot arm movements remotely. This could be done by wearing a special glove, or by operating a miniature model with position sensors. Robotic arms for amputees are doing a form of haptic sensing. Also note that some robot arms have feedback sensors (such as touch) whose signals get directed back to the human (vibrating the glove, locking model joints, etc.).

Tactile sensing (sensing by touch) usually involves force feedback sensors and current sensors. These sensors detect collisions by detecting unexpected force/current spikes, meaning a collision has occurred. A robot end effector can detect a successful grasp, and avoid grasping too tightly or too loosely, just by measuring force. Another method would be to use current limiters - sudden large current draws generally mean a collision/contact has occurred. An arm could also adjust end effector velocity by knowing if it is carrying a heavy object or a light object - perhaps even identify the object by its weight.

Try this. Close your eyes, and put both of your hands in your lap. Now keeping your eyes closed, move your hand slowly to reach for your computer mouse. Do it!!!! You will see why soon . . . Now what will happen is that your hand will partially miss, but at least one of your fingers will touch the mouse. After that finger touches, your hand will suddenly re-adjust its position because it now knows exactly where that mouse is. This is the benefit of tactile sensing - no precision encoders required for perfect contact!

End Effector Design In the future I will write a separate tutorial on how to design robot grippers, as it will require many more pages of material. In the meantime, you might be interested in reading the tutorial for calculating friction and force for robot end effectors. I also went into some detail describing my robot arm card dealing gripper. Anyway, I hope you have enjoyed this robot arm tutorial!

Practice What You Learned These three below images are made from sonar capable of generating a 2D mapped field of an underwater scene with fish (for fisheries counting). Since the data is stored in a similar way to data from a camera, vision algorithms can be applied.

(scene 1, scene 2, and scene 3) So here is your challenge: What two different algorithms can achieve the change from scene 1 to scene 2 (hint: scene 2 only shows moving fish)? Name the algorithm that can achieve the change from scene 2 to scene 3 (hint: color is made binary). What algorithm allows finding the location of the fish in the scene? If in scene two we were to identify the types of fish, what three different algorithms might work?

answers are at the bottom of this page

Downloadable Software (not affiliated with SoR) For those interested in vision code for the hacking, here is a great source for computer vision source code. To quickly get started with computer vision processing, try RoboRealm. Its simple GUI interface allows you to do histograms, edge detection, filtering, blob detection, matching, feature tracking, thresholding, transforms and morphs, coloring, and a few others.
answers: background subtraction and optical flow; blob detection; middle mass; image correlation, shape detection and pattern recognition, and facial recognition techniques

PROGRAMMING - DATA LOGGING TUTORIAL

Data Logging Data logging is the method of recording sensor measurements over a period of time. Typically in robotics you will not need a datalogger. But there are times when you may need to analyze a complex situation, process large amounts of data, diagnose an error, or perhaps need an automated way to run an experiment. For example, you can use a data logger to measure force and torque sensors, perform current or power use measurements, or just record data for future analysis.
ROBOT FORCE AND TORQUE SENSORS

(images: left 6 detect force, right 5 detect torque)

Contents: Theory | Capacity | Strain Gauge | Wheatstone Bridge | Costs | Damage | Installation | Cables

Force Sensors (Force Transducers) There are many types of force sensors, usually referred to as torque cells (to measure torque) and load cells (to measure force). From this point on I will refer to them as 'force transducers.' Force transducers are devices useful in directly measuring torques and forces within your mechanical system. In order to get the most benefit from a force transducer, you must have a basic understanding of the technology, construction, and operation of this unique device.

Digital Load Cell Cutaway

Theory of Measuring Forces There are many reasons why you would need to directly measure forces for your robot. Parameter optimization, force quantization, and weight measurement are a few. You may want to put force transducers on your bipedal robot to know how much weight is on each leg at any point in time. You may want to put force transducers in your robot grippers to control gripper friction - so as to not crush or drop anything picked up. Or you could use one so that your robot knows it has reached its maximum carrying weight (or even determine how much weight it is carrying). First, I will talk about how a force transducer converts this force into a measurable electrical signal.

Strain Gauge The strain gauge is a tiny flat coil of conductive wire (ultra-thin heat-treated metallic foil chemically bonded to a thin dielectric layer blah blah blah) that changes its resistance when you bend it. The idea is to place the strain gauge on a beam (with a special adhesive), bend the beam, then measure the change in resistance to determine the strain. Note that strain is directly related to the force applied to bend the beam. Unfortunately strain gauges are somewhat expensive at about $10-20 each, usually coming in packs of 5-10 (so it's like $50-$100). If you are willing to experiment, and your forces are small, you can also use conductive foam as a strain gauge. Compressing the foam lowers the electrical resistance. If you want more details, see this strain gauge tutorial. (please note that compression and tension are mislabeled, and should be swapped in the below animation - sorry!)

Wheatstone Bridge The typical strain gauge has a VERY LOW change in resistance when bent. So to measure this change in resistance, several tricks are applied. There is a ton of theory on this so I won't go into how it works, but basically a neat circuit invented in the 1800's can be used to easily amplify this difference. These circuits are built into all load and torque sensors, so you do not need to be concerned with how they work, just how to use them. The strain gauges inside the force transducer, usually a multiple of four, are connected into a Wheatstone bridge configuration in order to convert the very small change in resistance into a usable electrical signal. Passive components such as resistors and temperature dependent wires are used to compensate and calibrate the bridge output signal. Anyway, most force transducers have four wires coming out of them, so all you need to do is attach them as prescribed here:

Note that the wire colors are usually red, black, green, and white, and that some manufacturers for some lame reason use the red and black wires for signal and not for power. You will probably need to further amplify the signal by a factor of another few thousand, but that can easily be done with a voltage difference amplifier. Your output will give you a negative voltage for one direction of force, and a positive voltage for the opposite direction. If you are measuring voltage with an oscilloscope or multimeter, this is easy to measure. But for a microcontroller, you cannot have any negative voltage output. A microcontroller can only read 0V to 5V. As a solution, use a 2.5V voltage regulator for the ground of your force transducer, and a 7.5V ~ 8V voltage regulator for power to your force transducer. This will effectively shift the output to a 2.5V neutral voltage at your microcontroller. Your range should be between 0 and 5V. To keep your sensor within that range, experiment with your amplifier gain.
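As a sketch of reading that shifted output with a microcontroller, the code below converts an ADC reading back into a signed force; adc_read() and NEWTONS_PER_VOLT are hypothetical placeholders, so substitute your own ADC routine and the slope from your sensor's Certificate of Calibration (or your own testing).

#include <stdint.h>

#define ADC_REF_VOLTS     5.0     /* ADC reference voltage */
#define ADC_MAX_COUNTS    1023.0  /* 10-bit ADC assumed */
#define NEUTRAL_VOLTS     2.5     /* output with zero force, set by the 2.5V "ground" */
#define NEWTONS_PER_VOLT  40.0    /* placeholder calibration slope - use your own */

extern uint16_t adc_read(uint8_t channel);   /* your own ADC driver */

double read_force_newtons(uint8_t channel)
{
    double volts = adc_read(channel) * (ADC_REF_VOLTS / ADC_MAX_COUNTS);
    /* positive result = force in one direction, negative = the opposite direction */
    return (volts - NEUTRAL_VOLTS) * NEWTONS_PER_VOLT;
}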

Costs Unfortunately force transducers are on the expensive side. Expect to spend between a few hundred and a few thousand dollars each. There are many different types of sensors, of different dimensions and capacities and qualities, from a large variety of companies. Know that some companies hire actual engineers for tech support, some don't. Actual conversation I once had: "I have some technical questions, are you an engineer?" "Ummm, I don't have a degree in engineering, if that is what you mean. But I think I can help you." Surprisingly, some companies do not actually include a spec sheet (Certificate of Calibration) with their sensor, so you have no idea what the voltage-torque curve is! Insist on getting one, or expect to spend hours testing and graphing when you get your sensor.

Don't choose a sensor based solely on price - total cost of ownership matters just as much. Maintenance costs, recalibration time, possibility of failure, etc. should all be factors. As a side comment, there are ways to make your own force transducers if you're on a tight budget, but that is outside of the scope of this tutorial. So if you think buying a force transducer is for you, continue reading.

Capacity Selection Force overload is the primary reason for transducer failure, even though the process of selecting the right force capacity looks easy and straightforward. There are several terms you must understand to properly select for load capacity:

The measuring range is the range of values of mass for which the result of measurement is not affected by outer limit error.
The safe load/torque limit is the maximum load/torque that can be applied without producing a permanent shift in the performance characteristics beyond those specified.
The safe side load is the maximum load that can act 90 degrees to the axis along which the transducer is designed to be loaded without producing a permanent shift in performance beyond that specified.

A force transducer will perform within specifications until the safe load/torque limit or safe side load limit is passed. Beyond this point, even for a very short period of time, the transducer will be permanently damaged. Capacity Selection, Derating Unfortunately you cannot just rate your transducer by static forces alone. There are many additional issues you must be concerned about:
- Shock loading (sudden short-term forces)
- Dynamic influences (momentum)
- Off-centre distribution of force
- The possibility of an overload weight/torque
- Strain gauge fatigue (constant use and wear)
- Cable entry fatigue (the output wire bending a lot)

If there is a possibility that any of these may occur, you must then derate your force sensor (use a higher capacity). For example, if you expect a high fatigue rate, you should multiply your required capacity by two. Make sure you understand what you are measuring so that you do not waste money on a soon-to-be-broken force transducer. Over time, you may want to recalibrate your sensor occasionally in case of long term fatigue damage.

Damage Because force transducers are expensive, preventing them from being damaged should be a high priority. There are many ways to damage a transducer. Shock, overloading, lightning strikes or heavy surges in current, chemical or moisture ingress, mishandling (dropping, pulling on cable, etc.), vibration, seismic events, or internal component malfunctioning to name a few. If your sensor becomes damaged, don't just re-calibrate it. Mechanical failure may have catastrophic effects and you will no longer have a reliable sensor.

Lightning "Investigations indicate that a lightning strike within a 900ft radius of the geometrical centre of the site will definitely have a detrimental effect on the weighbridge." In most

cases, the actual damage is a direct result of a potential difference (1000+ volts) between the sensor circuit and sensor housing. If lighting strikes commonly happen near your area, make all grounds on your circuit common so the voltage floats together - and use surge protectors! And of course, no electric welding should be done near your sensor (hey, it has happened).

Moisture Obviously, water and electronics do not mix. Force transducers are always sealed to keep out the elements, however moisture/condensation damage occurs from a slow seeping over a long period of time. The damage can be multiplied when acids or alkalines are present. The most likely entry area for moisture is at the cable entry point, so it is important to keep this area protected more than any other. Manufacturers employ many techniques to seal it off, but there are additional techniques you the user can also employ. Know that temperature changes can often cause a pumping action to occur, pushing moisture down the inside of the cable. Entry can also be via a leaking junction box or through a damaged part of the cable. This can take some time to reach critical areas, but once there it will become sealed in place and do critical damage.

Corrosion The effects of corrosion on your force transducer will be the result of both the manufacturing quality and the environment in which the sensor is used. Make sure you understand how likely your choice of transducer is to corrode over time. Consider the metal type of the outer casing, the surface finish, the weld areas, the thickness/quality of moisture seals, and the cable material (PVC, PUR, or teflon). Also understand the environment - salt water, for example, has different corrosion effects depending on the local circumstances. Stainless steel in stagnant salt water is subject to crevice corrosion (a regular wash down is necessary to avoid degradation). Don't assume stainless steel means "no corrosion, no problem and no maintenance". In certain applications, painted or plated load cells may offer better long-term protection. An alternative is wrap-around protective covers. These can provide good environmental protection, but can be self-destructive if corrosive material is trapped inside the cover. Sealing compounds and rubbers used on some transducers can deteriorate when exposed to chemicals or direct sunlight. Because they embrittle rubber, chlorine-based compounds are a particular problem. Always make sure you keep your sensor maintained and clean to avoid corrosion.

Installation There are several considerations that are often forgotten during the mounting of force transducers. For example, it is a common misconception that a force transducer can be considered as a solid piece of metal on which other parts can be mounted. The performance of a force transducer depends primarily on its ability to deflect repeatably as load/torque is applied or removed. Make sure all supports are designed to avoid lateral forces, bending moments, torsion moments, off-center loading, and vibration. These effects not only compromise the performance of your force transducer, but they can also lead to permanent damage. Also, consider self-aligning mounts.

The S-Beam Load Cell Changes Shape Under Load

Force Transducer Cables Special attention should be paid to preventing the transducer cable from being damaged during and after installation. Never carry transducers by their cables, and provide drip loops to prevent water from running directly into the cable entry. Don't forget to provide adequate protection for the cable, near the sensor if possible. Load cells are always produced with a four- or six-wire cable. A four-wire cable is calibrated and temperature compensated with a certain length of cable. The performance of the load cell, in terms of temperature stability, will be compromised if the cable is cut; never cut a four-wire load cell cable! Six-wire cables can be cut, but all wires must be cut evenly to avoid any differences.

Extras What I have talked about is actually a very watered down tutorial for force transducers. If you would like to learn more, read the advanced load cell tutorial. I didn't write it, so good luck! You may also be interested in the data logging tutorial so that you can log your force/torque sensor data effectively.
SENSORS - CURRENT SENSOR

Current Sensing Current sensing is as it says - sensing the amount of current in use by a particular circuit or device. If you want to know the amount of power being used by any robot component, current sensing is the way to go. Applications Current sensing is not a typical application in robotics. Most robots would never need a current sensing ability. Current sensing is a way for a robot to measure its internal state and is rarely required to explore the outside world. It is useful for a robot builder to better understand power use of the various components within a robot. Sensing can be done for DC motors, circuits, or servos to measure actuator power requirements. It can be done for things like microcontrollers to measure power performance in different situations. It can be useful for things like robot battery monitors. And lastly, it is useful for robot hand grasp detection and collision detection. For example, if the current use suddenly increases, that means a physical object is causing resistance. Methods There are several methods to sense current, each having its own advantages and disadvantages. The easiest method is using a typical benchtop DC power supply.

This device is somewhat expensive as it ranges in the hundreds of dollars, but they are very common and you can easily find one available in any typical university lab. These devices are a must for any electrical engineer or robot builder. Operation of this device should be straightforward. Apply a voltage to your component, and it will quickly give a readout of the current you are drawing. Although this takes seconds and little effort to do, there are a few disadvantages to this method. The first disadvantage is that it is not highly accurate. Usually they can only measure in increments rounded off to the nearest 10mA. This is fine for high powered applications where an extra 5mA does not matter, but for low current draw devices this can be an issue. The next disadvantage is timing. A benchtop power supply only takes current measurements at set intervals - usually 3 times a second. If your device draws a steady current over time this is not a problem. But if for example your device ramps from 0 to 3 amps five times a second, the current reading you get will not be accurate. The last major disadvantage is that there is no data logging ability - therefore you cannot analyze any complex current draw data on a computer. The second method is using a digital multimeter.

The digital multimeter is another commonly available device capable of analyzing many different characteristics of your circuit - voltage, current, capacitance, resistance, temperature, frequency, etc. If you do not already have one, you definitely need one to make a robot. It would be like cooking without heat if you didn't have one . . . For cost, they range in price from around $10 to about $100. The price depends on features and accuracy. To measure current, all you do is connect your two leads in series with one of your power source wires. But again, there are disadvantages to this method. Like the benchtop power supplies, digital multimeters suffer timing issues. However, accuracy is usually an extra one or two decimal places better. Good enough for most applications. As for data logging, several available multimeters actually have computer linkup cables so that you may record current data to process later. The last method is using a chip called a Current Sense IC.

This ~$5 chip, using a really tiny resistor and a built in high gain amplifier, outputs a voltage in proportion to the current passing through it. Put the chip in series with what you want to measure, and connect the output to a data logging device such as a microcontroller. The microcontroller can print out data to hyperterminal on your computer, and from there you can transfer it to any data analyzing program you wish (like Excel).
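A minimal sketch of reading such a chip with a microcontroller ADC might look like the code below; adc_read() and VOLTS_PER_AMP are hypothetical placeholders, so check your specific IC's datasheet for its actual output scaling.

#include <stdint.h>
#include <stdio.h>

#define ADC_REF_VOLTS   5.0
#define ADC_MAX_COUNTS  1023.0   /* 10-bit ADC assumed */
#define VOLTS_PER_AMP   1.0      /* placeholder gain - take this from the sense IC datasheet */

extern uint16_t adc_read(uint8_t channel);   /* your own ADC driver */

double read_current_amps(uint8_t channel)
{
    double volts = adc_read(channel) * (ADC_REF_VOLTS / ADC_MAX_COUNTS);
    return volts / VOLTS_PER_AMP;
}

/* example: print one reading over the UART for HyperTerminal-style logging */
void log_current(uint8_t channel)
{
    printf("%f\r\n", read_current_amps(channel));
}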

This particular schematic below (click for the full expanded circuit) can measure current use of a servo. But it can easily measure current from any other device with no modification - and even multiple items simultaneously too! The capacitor is optional as it acts as a voltage buffer, ensuring maximum continuous current.

Parts of a Data Logger Typically, data loggers are very simple devices that contain just three basic parts:
1) A sensor (or sensors) measures an event. Anything can be measured. Humidity, temperature, light intensity, voltages, pressure, fault occurrences, etc.

2) A microcontroller then stores this information, usually placing a time-stamp next to each data set. For example, if you have a data logger that measures the temperature in your room over a period of a day, it will record both the temperature and the time that temperature was detected. The information stored on the microcontroller will be sent to a PC using a UART for user analysis. 3) And lastly the data logger will have some sort of power supply, typically a battery that will last at least the duration of the measurements. HyperTerminal HyperTerminal is a program that comes with Windows and works great for data logging. Back in the day it was used for things such as text-based webpage browsing and/or uploading pirated files to your friends over the then super fast 56k modems.

I will now show you how to set up HyperTerminal so that you may record data outputted by a microcontroller through rs232 serial. First, you need your microcontroller to handle serial data communications. If it has a serial or USB cable attached to it already, then you are set to continue. Next, you need a program that reads and prints out sensor data to serial as in this example: printf("%u, %u, %lu\r\n", analog(PIN_A0), analog(PIN_A1), get_timer1()); If you are interested, feel free to read more with the printf() function tutorial and the microcontroller UART tutorial.

PROGRAMMING - PRINTF()

Printing Out Data Printing out data from a microcontroller is extremely useful in robotics. You can use it for debugging your program, for use as a data logger, or to have your robot simply communicate with someone or something else. This short tutorial will give you sample code, and explain what everything means. printf() The most convenient way of writing to a computer through serial or to an LCD is to use the formatted print utility printf(). If you have programmed in C++ before, this is the equivalent of cout. Unlike most C functions this function takes a variable number of parameters. The first two parameters are always the output channel and the formatting string; these may then be followed by variables or values of any type. The % character is used within the string to indicate a variable value is to be formatted and output. Variables That Can Follow The % Character When you output a variable, you must also define what variable type is being output. This is very important, as, for example, a variable printed out as a signed long int will often not print out the same as that same variable printed out as an unsigned int.
c - Character
u - Unsigned int (decimal)
x - Unsigned int (hex - lower case)
X - Unsigned int (hex - upper case)
d - Signed int (decimal)
lu - Unsigned long int (decimal)
lx - Unsigned long int (hex - lower case)
LX - Unsigned long int (hex - upper case)
ld - Signed long int (decimal)
e - Float (scientific)
f - Float (decimal)

End Of Line Control Characters Sometimes you would like to control the spacing and positioning of text that is printed out. To do this, you would add one of the commands below. I recommend just putting \n\r at the end of all printf() commands.
\n \r \b \' \" \\ \t \v go to new line carraige reset backspace single quote double quote backslash horizontal tab vertical tab

Examples of printf():
printf("hello, world!");

printf("squirels are cute\n\rpikachu is cuter\n\r"); printf("the answer is %u", answer); printf("the answer is %u and %f", answer, float(answer)); printf("3 + 4 = %u", (3+4)); printf("%f, %u\n\r", (10/3), answer); printf("the sensor value is %lu", analog(1));
MICROCONTROLLER UART TUTORIAL

Contents: RS232 | EIA232F | TTL and USB | Adaptor Examples | Tx and Rx | Baud Rate, Misc | Asynchronous Tx | Loop-Back Test | $50 Robot UART

What is the UART? The UART, or Universal Asynchronous Receiver / Transmitter, is a feature of your microcontroller useful for communicating serial data (text, numbers, etc.) to your PC. The device changes incoming parallel information (within the microcontroller/PC) to serial data which can be sent on a communication line. Adding UART functionality is extremely useful for robotics. With the UART, you can add an LCD, bootloading, bluetooth wireless, make a datalogger, debug code, test sensors, and much more! Understanding the UART can be complicated, so I filtered out the useless information and present to you only the useful need-to-know details in an easy to understand way . . . The first half of this tutorial will explain what the UART is, while the second half will give you instructions on how to add UART functionality to your $50 robot. What is RS232, EIA-232, TTL, serial, and USB? These are the different standards/protocols used for transmitting data. They are incompatible with each other, but if you understand what each is, then you can easily convert them to what you need for your robot.

RS232 RS232 is the old standard and is starting to become obsolete. Few if any laptops even have RS232 ports (serial ports) today, with USB becoming the new universal standard for attaching hardware. But since the world has not yet fully swapped over, you may encounter a need to understand this standard. Back in the day circuits were noisy, lacking filters and robust algorithms, etc. Wiring was also poor, meaning signals became weaker as wiring became longer (relates to resistance of the wire). So to compensate for the signal loss, they used very high voltages. Since a serial signal is basically a square wave, where the wavelengths relate to the bit data transmitted, RS232 was standardized as +/-12V. To get both +12V and -12V, the most common method is to use the MAX232 IC (or ICL232 or ST232 - different IC's that all do the same thing), accompanied with a few capacitors and a DB9 connector. But personally, I feel wiring these up is just a pain . . . here is a schematic if you want to do it yourself (instead of a kit):

EIA232F Today signal transmission systems are much more robust, meaning a +/-12V signal is unnecessary. The EIA232F standard (introduced in 1997) is basically the same as the RS232 standard, but now it can accept a much more reasonable 0V to 5V signal. Almost all current computers (after 2002) utilize a serial port based on this EIA-232 standard. This is great, because now you no longer need the annoying MAX232 circuit! Instead what you can use is something called the RS232 shifter - a circuit that takes signals from the computer/microcontroller (TTL) and correctly inverts and amplifies the serial signals to the EIA232F standard. If you'd like to learn more about these standards, check out this RS232 and EIA232 tutorial (external site). The cheapest RS232 shifter I've found is the $7 RS232 Shifter Board Kit from SparkFun. They have schematics of their board posted if you'd rather make your own. This is the RS232 shifter kit in the baggy it came in . . .

And this is the assembled image. Notice that I added some useful wire connectors that did not come with the kit so that I may easily connect it to the headers on my microcontroller board. Also notice how two wires are connected to power/ground, and the other two are for Tx and Rx (I'll explain this later in the tutorial).

TTL and USB The UART takes bytes of data and transmits the individual bits in a sequential fashion. At the destination, a second UART re-assembles the bits into complete bytes.

You really do not need to understand what TTL is, other than that TTL is the signal transmitted and received by your microcontroller UART. This TTL signal is different from what your PC serial/USB port understands, so you would need to convert the signal. You also do not really need to understand USB, other than that it's fast becoming the only method to communicate with your PC using external hardware. To use USB with your robot, you will need an adaptor that converts to USB. You can easily find converters under $20, or you can make your own by using either the FT232RL or CP2102 IC's.

Signal Adaptor Examples Without going into the details, and without you needing to understand them, all you really need to do is just buy an adaptor. For example:

TTL -> TTL to RS232 adaptor -> PC
TTL -> TTL to EIA-232 adaptor -> PC
TTL -> TTL to EIA-232 adaptor -> EIA-232 to USB adaptor -> PC
TTL -> TTL to USB adaptor -> PC
TTL -> TTL to wireless adaptor -> wireless to USB adaptor -> PC

If you wanted bluetooth wireless, get a TTL to bluetooth adaptor, or if you want ethernet, get a TTL to ethernet adaptor, etc. There are many combinations, just choose one based on what adaptors/requirements you have. For example, if your laptop only has USB, buy a TTL to USB adaptor as shown with my SparkFun Breakout Board for CP2103 USB:

There are other cheaper ones you can buy today, you just need to look around. On the left of the image below is my $15 USB to RS232 adaptor, and the right cable is my RS232 extension cable for those robots that like to run around:

Below is my USB to wireless adaptor that I made in 2007 (although now companies sell them wired up for you). It converts a USB type signal to a TTL type signal, and then my Easy Radio wireless transmitter converts it again into a form easily transmitted by air to my robot:

And a close-up of the outputs. I soldered on a male header row and connected the ground, Tx, and Rx to my wireless transmitter. I will talk about Tx and Rx soon:

Even my bluetooth transceiver has the same Tx/Rx/Power/Ground wiring:

If you have a CMUcam or GPS, again, the same connections. Other Terminology . . .

Tx and Rx As you probably guessed, Tx represents transmit and Rx represents receive. The transmit pin always transmits data, and the receive pin always receives it. Sounds easy, but it can be a bit confusing . . . For example, suppose you have a GPS device that transmits a TTL signal and you want to connect this GPS to your microcontroller UART. This is how you would do it:

Notice how Tx is connected to Rx, and Rx is connected to Tx. If you connect Tx to Tx, stuff will fry and kittens will cry. If you are the type of person to accidentally plug in your wiring backwards, you may want to add a resistor of say ~2kohm coming out of your UART to each pin. This way if you connect Tx to Tx accidentally, the resistor will absorb all the bad ju-ju (current that will otherwise fry your UART).

Tx pin -> connector wire -> resistor -> Rx pin And remember to make your ground connection common!

Baud Rate Baud is a measurement of transmission speed in asynchronous communication. The computer, any adaptors, and the UART must all agree on a single speed of information, in 'bits per second'. For example, your robot would pass sensor data to your laptop at 38400 bits per second and your laptop would listen for this stream of 1s and 0s expecting a new bit every 1/38400 of a second = 26us (0.000026 seconds). As long as the robot outputs bits at the predetermined speed, your laptop can understand it. Remember to always configure all your devices to the same baud rate for communication to work! Data bits, Parity, Stop Bits, Flow Control The short answer: don't worry about it. These are basically variations of the signal, each with long explanations of why you would/wouldn't use them. Stick with the defaults, and make sure you follow the suggested settings of your adaptor. Usually you will use 8 data bits, no parity, 1 stop bit, and no flow control - but not always. Note that if you are using a PIC microcontroller you would have to declare these settings in your code (google for sample code, etc). I will talk a little more about this in coming sections, but mostly just don't worry about it. Bit Banging What if by rare chance your microcontroller does not have a UART (check the datasheet), or you need a second UART but your microcontroller only has one? There is still another method, called bit banging. To sum it up, you send your signal directly to a digital input/output port and manually toggle the port to create the TTL signal. This method is fairly slow and painful, but it works . . .

Asynchronous Serial Transmission As you should already know, baud rate defines bits sent per second. But baud only has meaning if the two communicating devices have a synchronized clock. For example, what if your microcontroller crystal has a slight deviation of .1 second, meaning it thinks 1 second is actually 1.1 seconds long. This could cause your baud rates to break! One solution would be to have both devices share the same clock source, but that just adds extra wires . . . All of this is handled automatically by the UART, but if you would like to understand more, continue reading . . .

Asynchronous transmission allows data to be transmitted without the sender having to send a clock signal to the receiver. Instead, the sender and receiver must agree on timing parameters in advance and special bits are added to each word which are used to synchronize the sending and receiving units. When a word is given to the UART for Asynchronous transmissions, a bit called the "Start Bit" is added to the beginning of each word that is to be transmitted. The Start Bit is used to alert the receiver that a word of data is about to be sent, and to force the clock in the receiver into synchronization with the clock in the transmitter. These two clocks must be accurate enough to not have the frequency drift by more than 10% during the transmission of the remaining bits in the word. (This requirement was set in the days of mechanical teleprinters and is easily met by modern electronic equipment.)

When data is being transmitted, the sender does not know when the receiver has 'looked' at the value of the bit - the sender only knows when the clock says to begin transmitting the next bit of the word. When the entire data word has been sent, the transmitter may add a Parity Bit that the transmitter generates. The Parity Bit may be used by the receiver to perform simple error checking. Then at least one Stop Bit is sent by the transmitter. When the receiver has received all of the bits in the data word, it may check for the Parity Bits (both sender and receiver must agree on whether a Parity Bit is to be used), and then the receiver looks for a Stop Bit. If the Stop Bit does not appear when it is supposed to, the UART considers the entire word to be garbled and will report a Framing Error to the host processor when the data word is read. The usual cause of a Framing Error is that the sender and receiver clocks were not running at the same speed, or that the signal was interrupted.

Regardless of whether the data was received correctly or not, the UART automatically discards the Start, Parity and Stop bits. If the sender and receiver are configured identically, these bits are not passed to the host. If another word is ready for transmission, the Start Bit for the new word can be sent as soon as the Stop Bit for the previous word has been sent. In short, asynchronous data is 'self synchronizing'.

The Loop-Back Test The loop-back test is a simple way to verify that your UART is working, as well as to locate the failure point of your UART communication setup. For example, suppose you are transmitting a signal from your microcontroller UART through a TTL to USB converter to your laptop and it isn't working. All it takes is one failure point for the entire system to not work, but how do you find it? The trick is to connect the Rx to the Tx, hence the loop-back test. For example, to verify that the UART is outputting correctly:
- connect the Rx and Tx of the UART together
- printf the letter 'A'
- have an if statement turn on a LED if 'A' is received
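A minimal sketch of that test might look like this, assuming simple uart_send()/uart_receive()/led_on() helpers (hypothetical names - use whatever your own UART and I/O code provides):

#include <stdint.h>

extern void    uart_send(uint8_t byte);   /* your own UART transmit routine */
extern uint8_t uart_receive(void);        /* your own blocking UART receive routine */
extern void    led_on(void);              /* your own LED pin code */

void loopback_test(void)
{
    uart_send('A');                /* Tx is wired directly back to Rx */
    if (uart_receive() == 'A')
        led_on();                  /* the UART and your code both work */
}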

If it still doesn't work, you know that your code was the failure point (if not more than one failure point). Then do this again on the PC side using HyperTerminal, directly connecting Tx and Rx of your USB port. And then yet again using the TTL to USB adaptor. You get the idea . . . I'm willing to bet that if you have a problem getting it to work, it is because your baud rates aren't the same/synchronized. You may also find it useful to connect your Tx line to an oscilloscope to verify your transmitting frequency:

Top waveform: UART transmitted 0x0F Bottom waveform: UART received 0x0F

Adding UART Functions to AVR and your $50 Robot To add UART functionality to your $50 robot (or any AVR based microcontroller) you need to make a few minor modifications to your code and add a small amount of extra hardware. Full and Half Duplex Full Duplex is defined by the ability of a UART to simultaneously send and receive data. Half Duplex is when a device must pause either transmitting or receiving to perform the other. A Half Duplex UART cannot send and receive data simultaneously. While most microcontroller UARTs are Full Duplex, most wireless transceivers are Half Duplex. This is due to the fact that it is difficult to send two different signals at the same time under the same frequency, resulting in data collision. If your robot is wirelessly transmitting data, in effect it will not be able to receive commands during that transmission, assuming it is using a Half Duplex transmitter. Please check out the step-by-step instructions on how to add UART functionality to your $50 robot >>>.
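For reference, a bare-bones polled UART setup on an ATmega8-class AVR looks roughly like the sketch below (register names differ slightly on newer parts such as the ATmega168, which use UBRR0H, UCSR0B, etc.); the step-by-step $50 Robot instructions linked above contain the full, tested version.

#include <avr/io.h>
#include <stdint.h>

#define F_CPU 8000000UL      /* assumed clock - change to match your crystal/fuse settings */
#define BAUD  38400UL

void uart_init(void)
{
    uint16_t ubrr = (F_CPU / (16UL * BAUD)) - 1;          /* standard UBRR formula, U2X off */
    UBRRH = (uint8_t)(ubrr >> 8);
    UBRRL = (uint8_t)(ubrr);
    UCSRB = (1 << RXEN) | (1 << TXEN);                    /* enable receiver and transmitter */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);   /* 8 data bits, no parity, 1 stop bit */
}

void uart_send(uint8_t byte)
{
    while (!(UCSRA & (1 << UDRE)))   /* wait until the transmit buffer is empty */
        ;
    UDR = byte;
}

uint8_t uart_receive(void)
{
    while (!(UCSRA & (1 << RXC)))    /* wait until a byte has arrived */
        ;
    return UDR;
}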

The time stamp isn't always necessary, and you can always add or remove ADC (analog to digital) inputs. Note that the get_timer1() command must be called right before, during, or directly after the sensor readings - or the time recorded will be meaningless.

Also, use commas to separate output data values as already shown. I will explain the importance of this later. Steps to Logging With HyperTerminal Now open up HyperTerminal: Start => Programs => Accessories => Communications => HyperTerminal. You should see this window:

Type in a desired name and push OK. The icon doesn't matter.

Now select a COM port, while ignoring the other options. Push OK.

In the COM properties, select the options you want/need depending on the microcontroller and serial communications setup you have. Push OK. Chances are, if you just change the Bits per second to 115200 and leave the other options as default, it should work fine. To make sure, check your C code header for a line that looks something like this:

#use rs232(baud=115200, parity=N, bits=8, xmit=PIN_C6, rcv=PIN_C7)

Now in the menu, select Transfer => Capture Text..., create a text file, select it in the Capture Text window, and click Start. Now connect your microcontroller to your computer (by serial/usb) and then turn the microcontroller on. Next you want to tell HyperTerminal to Call. Select the image of the phone:

Finally, tell your microcontroller to start logging data, and you will see the data appear on screen. Even if you do not plan to save your data, this can be a great feedback tool when testing your robot.

When logging is completed, click the disconnect button:

Then select Transfer => Capture Text => Stop You should now have a .txt file saved with all your data. But you're not done yet!

Rename the file to a .csv, or Comma Separated Value (CSV) file format. This allows you to open the file in Excel with each value separated into columns and rows, making data processing much easier. Now you may interpret the sensor data any way you like.
ROBOT SENSOR INTERPRETATION

Robot Sensor Interpretation Most roboticists understand fairly well how sensors work. They understand that most sensors give continuous readings over a particular range. Most usually understand the physics behind them as well, such as speed of sound for sonar or sun interference for IR. Yet most do not understand how to interpret sensor data into a mathematical form readable by computers. Roboticists would just make case-based situations for their sensors, such as 'IF low reading, DO turn right' and 'IF high reading, DO turn left.' That is perfectly ok to use . . . unless you want fine angle control. The other problem with case-based programming is that if your sensor reading bounces between two cases, your robot will spaz out like crazy (oscillate). Most amazingly, doing fine angle control is actually almost just as simple. There are only 3 steps you need to follow:
- Gather Sensor Data (data logging)
- Graph Sensor Data
- Generate Line Equation

The first step is incredibly simple, just somewhat time consuming. Graphing just takes minutes. And generating the line equation is usually just a few clicks of your mouse.

Gather Sensor Data This is fairly straightforward. Do something with your sensor, and record its output using Excel. If you have a range sensor (such as sonar or Sharp IR), record the distance of the object in front of it and the range data output. If you have a photoresistor, record the amount of light (probably arbitrarily . . . # of candles maybe?) and the sensor data from it. If you have a force sensor, apply weight to it, record the weight, and yes, the data. This is very simple and probably brain-deadeningly easy, but there are a few things you will have to watch out for.

First is non-continuity. Some sensors (such as sonar and Sharp IR) do not work properly at very close range. Stupid physics, I know. The next is non-linearity. For example, your sensor readings may be 10, 20, and 30, but the distances might be 12cm, 50cm, and 1000cm. You will have to watch for these curves. Usually however they occur only near the minimum and maximum values a sensor can read. Then there is sensor noise. Five readings in the same exact situation could give you five near yet different values. Verify the amount of sensor noise you have, as some sensors can have it fairly bad. The way to get rid of noise is to take a bunch of readings, then keep only the average. Make sure you test for noise in the actual environment your robot will be in. Obvious, but some desktop robot builders forget. The last issue you will have is the number of data points to record. For perfectly linear sensors you only need to record the two extremes, and draw a line between them. However since this is almost always not the case, you should record more. You should always record more points the more non-linear your sensor is. If your sensor is non-linear only in certain cases, record extra points just in those cases of concern. And obviously, the more points you have recorded, the more accurate your sensor representation can be. However, do you really need 10,000 points for a photoresistor? It's a balance.

Graph Sensor Data Ok, now that you have all your data recorded in two columns in Excel, you need to graph it. But this is simple.
1) First scroll with your mouse and highlight the cells with data in the first column.
2) Then hold Ctrl and scroll the cells in the other column of data. You should now have two columns separately highlighted.
3) Next click the graph button in the top menu.
4) A window should open up. Select XY (Scatter). Then in Chart sub-type select the middle left option. It's the one with curvy lines with dots in them. Click next.
5) If you want to compare multiple robot sensors, use the Series tab. Otherwise just use the Data Range tab. Make sure 'Series in: Columns' is selected. Click next.
6) Pick the options you want, label your chart, etc. Click next and finish. A chart should now appear.
7) Still confused? Download my excel sensor graph examples.

There are some possible graphs you may see with your sensors:

This above graph is of a linear sensor. There is a straight line, so a simple 10th grade y=x*c+d equation can predict distance given a data value.

This above graph is non-continuous and non-linear. You will see crazy stuff happen at the beginning and end of the graph. This is because the sensor cannot read at very close distances or very far distances. But it is simpler than it looks. Crop off the crazy stuff, and you will get a very simple non-linear x=y^2 line. You basically need to make sure that your sensors do not encounter those situations, as a program would not be able to distinguish them from a normal situation.

Although this above graph looks simple, it can be a little tricky. You actually have two different lines with two different equations. The first half is an x=y^2 equation and the second half is a linear equation. You must do a case-based program to determine which equation to use for interpreting data. Or if you do not care about accuracy too much, you can approximate both cases as a single linear equation.

Generate Line Equation After determining what kind of graph you have, all you need to do is use the Excel trendline ability. Basically this will convert any line into a simple equation for you.
1) If there are no non-continuities (kinks in the graph), right click the line in the graph, and click 'Add Trendline...' If you do have a non-continuity, separate the non-continuous lines and make two graphs. That way each can be interpreted individually. If you do not care about error, or the error will be small, one graph is fine.
2) Now select the Trend/Regression type. Just remember, although more complex equations can reduce error, they increase computation time. And microcontrollers usually can only handle linear and exponential equations. Click OK and see how well the lines fit.
3) Now click the new trendline and click 'Format Trendline.' A new window should appear.
4) Go to the Options tab and check the box that says 'Display equation on chart.' Click OK.
5) There you have it - an equation that you can use on your robot! Given any x data value, your equation will pump out the exact distance or light amount or force or whatever.

Load Cell Linearity Graph Example This is a graph and equation I generated using a Load Cell (determines force). I had to put the sensor in a voltage amplifier to get a good measurable voltage.
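On the robot side, using the trendline equation is just a one-line calculation. The sketch below assumes a linear fit; SLOPE and OFFSET are placeholders for the 'm' and 'b' values Excel prints on your chart.

#include <stdint.h>

#define SLOPE   0.262   /* placeholder 'm' from your trendline equation */
#define OFFSET  4.7     /* placeholder 'b' from your trendline equation */

/* convert a raw sensor/ADC value into real units (cm, N, lux, etc.) */
double sensor_to_units(uint16_t raw_adc)
{
    return SLOPE * (double)raw_adc + OFFSET;
}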

Additional Info On Data Logging There are many ways to log data, depending on the situation. There is event based data logging, meaning that it only records data when a specific single-instant event occurs. This event could be a significant change in sensor output or a passing of a user defined threshold. The advantage of this method is that it significantly reduces data that needs to be stored and analyzed. The other method is selective logging, which means logging will occur over just a set period of time (usually a short period of time). If for example you want to analyze an event, your data logger would start logging at the beginning of the event and stop at the end. The advantage of this method is that you can get high resolution data without wasting memory. Can I Buy a Data Logger for My PC? Of course. They are called DAQ, or Data Acquisition devices, and have a lot of neat built in software and hardware to make things easier for you. But they can get costly, ranging in the $100's.
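Before moving on, here is a minimal C sketch of the event based logging described above; it only records a sample when the reading moves by more than a user defined threshold. read_adc() and log_value() are hypothetical placeholders for your own sampling and storage routines:

#define EVENT_THRESHOLD 10   //log only when the reading changes by more than this (made-up value)

void log_events(void)
{
    unsigned int last = read_adc();
    unsigned int now;
    signed long diff;

    while (1)
    {
        now = read_adc();
        diff = (signed long)now - (signed long)last;

        if (diff > EVENT_THRESHOLD || diff < -EVENT_THRESHOLD)
        {
            log_value(now);   //hypothetical storage routine (EEPROM, UART, SD card, etc.)
            last = now;       //remember the logged value so slow drift is ignored
        }
    }
}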
PROGRAMMING - DIFFERENTIAL DRIVE

What is a Differential Drive Robot? Differential drive is a method of controlling a robot with only two motorized wheels. What makes this algorithm important for a robot builder is that it is also the simplest control method for a robot. The term 'differential' means that robot turning speed is determined by the speed difference between both wheels, each on either side of your robot. For example: keep the left wheel still, and rotate the right wheel forward, and the robot will turn left. If you are clever with it, or use PID control, you can get interesting curved paths just by varying the speeds of both wheels over time. Don't want to turn? As long as both wheels go at the same speed, the robot does not turn - only going forward or reverse.
PROGRAMMING - PID CONTROL

PID Control A proportional integral derivative controller (PID controller) is a common method of controlling robots. PID theory will help you design a better control equation for your robot. Shown here is the basic closed-loop (a complete cycle) control diagram:

The point of a control system is to get your robot actuators (or anything really) to do what you want without . . . ummmm . . . going out of control. The sensor (usually an encoder on the actuator) will determine what is changing, the program you write defines what the final result should be, and the actuator actually makes the change. Another sensor could sense the environment, giving the robot a higher-level sense of where to go. Terminology To get you started, here are a few terms you will need to know: error - The error is the amount by which your device isn't doing something right. For example, if your robot is going 3mph but you want it to go 2mph, the error is 3mph-2mph = 1mph. Or suppose your robot is located at x=5 but you want it at x=7, then the error is 2. A control system cannot do anything if there is no error - think about it, if your robot is doing what you want, it wouldn't need control!

proportional (P) - The proportional term is typically the error. This is usually the distance you want the robot to travel, or perhaps a temperature you want something to be at. The robot is at position A, but wants to be at B, so the P term is A - B. derivative (D) - The derivative term is the change in error made over a set time period (t). For example, if the error was C before and now it's D, and t time has passed, then the derivative term is (D-C)/t. Use the timer on your microcontroller to determine the time passed (see timer tutorial).
PROGRAMMING - TIMERS

Timers for Microcontrollers The timer function is one of the basic features of a microcontroller. Although some compilers provide simple macros that implement delay routines, in order to determine time elapsed and to maximize use of the timer, understanding the timer functionality is necessary. This example will be done using the PIC16F877 microcontroller in C. To introduce delays in an application, the CCS C macros delay_ms() and delay_us() can be used. These macros provide an ability to block the MCU until the specified delay has elapsed. But what if you instead want to determine elapsed time for say a PID controller, or a data logger? For tasks that require the ability to measure time, it is possible to write code that uses the microcontroller timers. The Timer Different microcontrollers have different numbers and types of timers (Timer0, Timer1, Timer2, watchdog timer, etc.). Check the data sheet for the microcontroller you are using for specific details. These timers are essentially counters that increment based on the clock cycle and the timer prescaler. An application can monitor these counters to determine how much time has elapsed.

On the PIC16F877, Timer0 and Timer2 are 8-bit counters whereas Timer1 is a 16-bit counter. Individual timer counters can be set to an arbitrary value using the CCS C macros set_timer0, set_timer1, or set_timer2. When the counter reaches its limit (255 for 8-bit and 65535 for 16-bit counters), it overflows and wraps around to 0. Interrupts can be generated when wrap around occurs, allowing you to count these resets or initiate a timed event. Timer1 is normally used for capture and compare functions, and Timer2 for PWM. Each timer can be configured with a different source (internal or external) and a prescaler. The prescaler determines the timer granularity (resolution). A timer with a prescaler of 1 increments its counter every 4 clock cycles - 1,000,000 times a second if using a 4 MHz clock. A timer with a prescaler of 8 increments its counter every 32 clock cycles. It is recommended to use the highest prescaler possible with your application. Calculating Time Passed The equation to determine the time passed after counting the number of ticks would be:
delay (in ms) = (# ticks) * 4 * prescaler * 1000 / (clock frequency)

for example . . . Assume that Timer1 is set up with a prescaler of 8 on a MCU clocked at 20 MHz. Assume that a total of 6250 ticks were counted. Then . . .
delay (in ms) = (# ticks) * 4 * 8 * 1000 / (20000000)

delay (in ms) = (6250) / 625 = 10 ms Code in C First you must initialize the timer:
long delay;

setup_timer_1(T1_INTERNAL | T1_DIV_BY_8); //Set Timer1 (16-bit) prescaler to 8

now put this code in your main loop:
set_timer1(0); //reset timer to zero where needed

printf("I eat bugs for breakfast."); //do something that takes time //calculate elapsed time in ms, use it for something like PID delay = get_timer0() / 625;

//or print out data and put a time stamp on it for data logging
printf("%u, %u, %lu\r\n", analog(PIN_A0), analog(PIN_A1), get_timer1());

Note that it is very important that you do not call the get_timer1() command until exactly when it is needed. In the above example I call the timer in my printf() statement - exactly when I need it. Timer Overflow You should also be careful that the timer never overflows in your loop or the timer will be wrong. If you expect it to overflow, you could use a timer overflow interrupt that counts the number of overflows - each overflow being a known length of time depending on your prescaler. In CCS C, interrupt service routines are functions that are preceded with #int_xxx. For instance, a Timer1 interrupt service routine would be declared as follows:
#int_timer1
void timer1_interrupt() //timer1 has overflowed
{
    //do something quickly here
    //maybe count the interrupt
    //or perform some task
    //good practice is to not stay in the interrupt too long
}

To enable interrupts, the global interrupt bit must be set and then the specific interrupt bits must be set. For instance, to enable Timer0 interrupts, one would program the following lines right after the timer is initialized:
enable_interrupts(GLOBAL);
enable_interrupts(INT_TIMER0);

If you want to stop the application from processing interrupts, you can disable the interrupts using the disable_interrupts(INT_TIMER0) CCS C macro. You can either disable a specific interrupt or all interrupts using the GLOBAL define. Timer Delay Here is another code sample that shows how to create a delay of 50 ms before resuming execution (alternative to delay_ms):
setup_timer_1(T1_INTERNAL | T1_DIV_BY_8); //Set Timer1 (16-bit) prescaler to 8

set_timer1(0); //reset timer
while (get_timer1() < 31250); // wait for 50ms (625 ticks per ms at 20 MHz with a prescaler of 8)
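If you need to time something longer than one timer period, combine the overflow interrupt shown earlier with a reload value. Here is a minimal sketch under the same 20 MHz / prescaler-of-8 assumptions; tenms_count is a made-up variable name:

unsigned long tenms_count = 0; //each count = 10ms of elapsed time

#int_timer1
void timer1_overflow(void)
{
    set_timer1(59286); //65536 - 6250: the next overflow happens after exactly 6250 ticks = 10ms
    tenms_count++;
}

//in main, before the loop: set_timer1(59286); enable_interrupts(INT_TIMER1); enable_interrupts(GLOBAL);
//total elapsed time in ms is then simply tenms_count * 10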

integral (I) - The integral term is the accumulated error made over a set period of time (t). For example, if your robot is continually off by a certain amount all the time, the I term will catch it. Let's say at time t1 the error was A, at t2 it was B, and at t3 it was C. The integral term would be A*t + B*t + C*t, where t is the time step between readings - each error is multiplied by the time it persisted, then summed. tweak constant (gain) - Each term (P, I, D) will need to be tweaked in your code. There are many things about a robot that are very difficult to model mathematically (ground friction, motor inductance, center of mass, duct tape holding your robot together, etc.). So oftentimes it is better to just build the robot, implement a control equation, then tweak the equation until it works properly. A tweak constant is just a guessed number that you multiply each term by. For example, Kd is the derivative constant. Ideally you want the tweak constant high enough that your settling time is minimal but low enough so that there is no overshoot. P*Kp + I*Ki + D*Kd

What you see in this image is typically what will happen with your PID robot. It will start with some error and the actuator output will change until the error goes away (near the final value). The time it takes for this to happen is called the settling time. Shorter settling times are almost always better. Often times you might not design the system properly and the system will change so fast that it overshoots (bad!), causing some oscillation until the system settles. And there will usually be some error band. The error band is dependent on how fine a control your design is capable of - you will have to program your robot to ignore error within the error band or it will probably oscillate. There will always be an error band, no matter how advanced the system. ignoring acceptable error band example:

if abs(error) <= .000001 //subjectively determined acceptable error band
    then error = 0; //ignore it

The Complete PID Equation Combining everything from above, here is the complete PID equation: Actuator_Output = Kp*P + Ki*I + Kd*D or in easy to understand terms: Actuator_Output =
tweakA * (distance from goal) + tweakB * (change in error) + tweakC * (accumulative error)
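Putting the terms together, here is a minimal PID loop sketch in C. The gains and the sample period are made-up numbers, and read_error() and set_actuator() are hypothetical stand-ins for your own sensor and motor code:

#define DT 0.05   //sample period in seconds (20 updates per second)

float Kp = 1.0, Ki = 0.0, Kd = 0.2;   //tweak constants - tune these on the real robot
float integral = 0;
float last_error = 0;

void pid_update(void)
{
    float error = read_error();                    //P term: how far we are from the goal
    float derivative = (error - last_error) / DT;  //D term: change in error per unit time

    integral = integral + error * DT;              //I term: error accumulated over time
    last_error = error;

    set_actuator(Kp*error + Ki*integral + Kd*derivative);
}

On a small microcontroller you would normally rewrite this with scaled integer math, for the speed reasons covered in the variables section later in this document.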

Simplifications The nice thing about tuning a PID controller is that you don't need to have a good understanding of formal control theory to do a fairly good job of it. Most control situations will work with just an hour or so max of tuning. Better yet, rarely will you need the integral term. That's right, just delete and ignore it! The only time you will need this term is when acceleration plays a big factor with your robot. If your robot is really heavy, or gravity is not on its side (such as steep hills), then you will need the integral term. But out of all the robots I have ever programmed, only two needed an integral term - and both robots were over 30 lbs with a requirement for extremely high precision (millimeter or less error band). Control without the integral term is commonly referred to as simply PD control. There are also times when you do not require a derivative term, but usually only when the device mechanically stabilizes itself, works at very low speeds so that overshoot just doesn't happen, or you simply don't require good precision. Sampling Rate Issues The sampling rate is the speed at which your control algorithm can update itself. The faster the sampling rate, the higher precision control your robot will have. Slower sampling rates will result in higher settling times and an increased chance of overshoot (bad). To increase sampling rate, you want an even faster update of sensor readings, and minimal delay in your program loop. It's good to have the robot react to a changing environment before it drives off the table, anyway. Humans suffer from the sampling rate issue too (apparently drinking reduces the sampling rate, who would have guessed?).

The rule of thumb is that the sample time should be between 1/10th and 1/100th of the desired system settling time. For a typical homemade robot you want a sampling rate of about 20+/second (very reasonable with today's microcontrollers).

The differential drive algorithm is useful for light chasing robots. This locomotion is the most basic of all types, and is highly recommended for beginners. Mechanical construction, as well as the control algorithm, cannot get any simpler than this.
BASIC ROBOT MECHANICS TUTORIALS

Brazing (like welding, but way easier) Ever wanted to weld but always had a good reason not to? Brazing is a much better option when making small and medium size robots. The power of welding for under ~$60, and easy enough for a complete novice to self-learn.
FEA Finite Element Analysis Tutorial Learn how computers can be used to computationally simulate and optimize the mechanical structure of your robot.
Robot Chassis Construction Do you know the parts that make up a robot, but not quite sure how to put them all together? Need to know how to, say, attach a wheel to a motor? This is what you need to know to construct a basic differential drive robot chassis.
Robot Suspension System The problem with a typical suspension system is that it's very complex and involves many parts. As a solution, I invented a new type of robot suspension consisting of one single flexible part.
Theory: Statics Want to optimize your robot parameters mathematically? Want to verify that an expensive motor you are about to purchase has enough torque? Calculate things such as moment arms, gearing, friction, torque, and more.
Theory: Dynamics Mathematically optimize your robot further by calculating things such as velocity, acceleration, and momentum.
Theory: Energy Mathematically optimize your robot battery by calculating the required energy your robot needs to perform.

Gears, Sprockets, and Chains Learn the theory behind gears. Understand how to calculate gear ratios, torque, efficiency, and rotational velocity.

Note that this algorithm doesn't just work for wheeled robots, but is also the same algorithm you must use for tank tread type robots and biped robots. For examples of differential drive robots, see my sumo robot, mobipulator robot, and robot boat.
SUMO ROBOT CONSTRUCTION - STAMPY

Sumo Robots This robot has been designed for the DC Sumo Competition for January of 2007. Unlike your typical sumo robot competition, the rules limit brute speed and strength. By imposing speed limits on all robots, it eliminates the 'smash and bash' in exchange for increased robotic intelligence and sensor use. In the spirit of sumo, I named my robot Stampy.

Strategy Never go into battle without a clever strategy. Stampy employs a clever strategy of using the opponent's force against itself. Instead of pushing back in a frontal collision, it tries to wedge its ramp under just a single wheel of its opponent.

As the opponent robot continues to drive, it will start to tilt over. As this happens, Stampy changes direction and continues to push the opponent the rest of the way over. It's not about pushing the opposing robot out of the ring, but instead flipping it over. Just watch the video before continuing: Algorithm The algorithm takes data from a scanning Sharp IR Rangefinder, and uses edge detection to decide where to drive. If no target is available, it goes into a search mode and spins clockwise. Otherwise, it attempts to drive towards the left side of the opposing robot. This is the cheapest and simplest method for a robot to locate and follow other objects. pseudocode:
//scanner code
if sharp IR detects object
    scanning IR turns left
else //no object detected
    scanning IR turns right

//robot motion code
if scanner is pointing far left
    robot turns left
else if scanner is pointing far right
    robot turns right
else //scanner pointing forward
    robot drives straight

As shown, the scanner goes left if it sees a googly-eyed robot. If it doesn't detect it, the scanner turns right until it does. As a result, the scanner converges on the left edge of the googly-eyed robot:

The algorithm is guaranteed to converge on a stable point if the scanner locates the object from the left edge. But if the object is detected on the right edge, there is no convergence. This can potentially cause a problem:

The solution: if the sensor misses the object and rotates to the right to its maximum position, tell the scanner to reset its angle to the far left. There is also the solution of using two scanners so that there is a convergence on both the left and right edges of the object. But for the purposes of sumo I couldn't do this. My commented sumo robot source code is available for download. Stealth Technology As your opponent robots get smarter, and start using sensors to detect your robot, countermeasures need to be applied. Stampy employs three such stealth technologies. IR/LED Defence Two popular methods of robot sensors are infrared emitter/detectors and photoresistors with reflective LED's. To counter these sensors, you need to coat your robot in an IR/visible light absorbent paint. Make sure the paint has a rough coat, as a shiny coat is more reflective. I used black acrylic paint, but I didn't test it to see how well it works. Another popular sensor used is the Sharp IR Rangefinder. This sensor is significantly more immune to surface color, but it will still have some decreased accuracy.

Sonar Sonar is the other popular method of sensing for sumo robots. Sonar has two weaknesses - the 'softness' of the target, and the angle of the target. Coat your robot in sound absorbing pointy shaped foam.

The other method is similar to stealth aircraft - use flat surfaces and sharp angles to deflect the sonar. I extended the ramp across the entire robot frontal area for sonar reflection, and use black foam wheels for light and sound absorption (plus, foam is good for traction). These methods will significantly decrease the detectable range of your robot.

Active Sensor Avoidance Sensor avoidance is where your robot either sees or predicts where the enemy robot sensors are directed, and then moves away from them. Stampy avoids being in front of the target robot at all times, and instead side-swipes. A defence against sensor avoidance would be to use a scanning sensor instead of a fixed one, such as what Stampy employs. Standing Up to Save Space All sumo competitions have restrictive length and width limits, but rarely any restrictive height limits. To make Stampy eligible for these other competitions (and just for

demonstration), it was designed to start off vertical, and flip itself down for attack mode. By starting off in vertical mode, the length restriction is bypassed.

For a ramp based sumo robot, you want all the weight up at the front (to push the ramp down). So when in vertical position, all the weight is high up. To go to horizontal position, Stampy simply drives backwards for half a second. Assembly My very first step to building Stampy was to design it in CAD:


You can download the CAD files for AutoDesk (2.2mb), or if you do not have AutoDesk, you can download the design drawings. You may also download the Stampy DWF.

My next step was to prepare my parts for CNC machining. Most of the parts are fairly simple and do not require CNC, but I wanted more practice at CNC and for it to look professional. I used EdgeCAM for CNC simulation and G-Code generation:

I have included the G-Code for both of those parts. It was written for the Haas Mill, but with minor modifications it would work on any CNC machine. Note that the part called Frame had additional holes drilled into it, and two fillets (located by the servo) removed with a mill bit, with a desktop drill press. Materials used were what I had around: sheets of HDPE, aluminum, and copper. assembly images:

all of the parts disassembled

attached sides, bottom, and servos

attached battery by velcro, and attached the lower ramp

screwed in wheels, and added spacers for electronics mount

assembled scanner

added upper ramp, attached electronics, attached velcro for scanner

attached scanner, remachined the side Frame (there were errors), and painted ramp black with acrylic paint

Time Log Total time required to create this robot was ~25 hours. The meticulously recorded breakdown of hours:
time to CAD: 2 hours 40 minutes
programming: 4 hours 20 minutes
part machining: 4 hours
creating CNC g-code: 45 minutes
CNC: 2 hours 30 minutes
assembly: 1 hour 30 minutes
fixing mistakes: 4 hours (remachining and assembling Frame)
test programming: 3 hours 40 minutes
video filming: 1 hour
Additional notes on time: Since I reused a lot of my CAD files, code, and parts from older robots, my time log is a bit skewed. It probably saved me an additional 10+ hours of work. Cost/List of Parts As I already owned all parts that were used on this robot, I spent $0 to produce the entire thing.

But if I were to go out and purchase parts, this would be the rundown:

Cerebellum Microcontroller: $60 (no longer sold)
HS-311 Servo: $10
Sharp IR: $15
HDPE, Aluminum, Copper Sheeting: $20
1800mAh 6V NiMH Battery: $13
2 Foam Wheels: $8
Spacers, screws, velcro: $10
Two HS-225MG Servos: $28 each
Acrylic paint: $5

Total: ~$197 Bonus Video! Just for fun I decided to have a battle between the $50 Robot (remote control mode, driven by me) and Stampy. This shows autonomous robots kick bot! (ok sorry, really bad pun . . .)
Wall Climbing and Manipulating Robot

The ASME Student Design Competition This 'mobipulator' robot (mobile manipulator) was built for the ASME Student Design Competition in 2004, by me and three friends from CMU. It was a one time only thing, because each year ASME comes up with a new competition. As such I will not go into details about the competition and how to win it. However, I will talk about what we learned that could be used by you as a reader, if you had a similar design problem. As such, the purpose of this short tutorial is to demonstrate a unique method of wall climbing, and show dual functionality of a 1 DOF robot arm (not counting the gripper). Ok technically it is 3 DOF, because of the moving/rotating robot frame . . . The rules basically went as this:
o gather and retrieve objects of unknown size/weight
o battery power is severely limited
o robot can be remote controlled
o allowable robot height/length/width limited
o further objects gained you more points
o time limit of 3 minutes

Without further boring descriptions, watch the video!

Specifications The robot used one HS-311 servo for the actuated storage bucket, one modified HS805BB for the 1 DOF robot arm, one servo for the robot gripper end-effector, and two modified servos for the differential drive train.

The bucket was built from bent aluminum sheet metal, and the frame was both milled and CNCed out of aluminum raw material. Specially shaped foam was used inside the bucket to keep the objects inside from rolling out while wall climbing.

1) The wheels on each side were linked together by a timing belt, for tank style driving.
2) A rubber band was used as a belt tensioner.
3) The four wheels were custom CNC machined.
4) Conveyor belt material was glued onto the wheels and grippers for its high friction properties.
5) RC receiver antenna, wrapped so as to not tangle

Control, and the Driver The remote control was a Hitec Laser 6 (it has 6 channels), with each channel controlling one servo individually. The agility of a remote control robot is very much a function of driver skill. If you ever have a remote control robot contest, driver skill can significantly affect robot performance. Practice, practice, practice. Know exactly how your robot will perform. Practice in realistic settings, too. We went as far as to reconstruct the test course ourselves, timing everything the robot did for speed optimization, and pushing the limits to see what the robot can do. In the video I was operating 5 servos simultaneously with two hands on the remote, a skill that took many, many hours of practice. But it all paid off . . . An image of a prototype version climbing a wall in our recreated test course:

You probably did not gather this from the video, but the arm was used as a balancing weight shift as it climbed the wall - not just a lifting mechanism. The claws also had to be opened up during the climb so as to not break. This early plastic-made prototype version attempted to climb the wall before we learned about the weight shift feature of the arm. Embarrassingly, the basket was lowered accidentally and the bot got stuck on its way over. The gripper on this version was made from nylon, and broke during the climb.

Difficulties There were two main difficulties. The first is that the conveyor belt material, in combination with tank style driving friction issues, made turning on carpeting very difficult. At the end of the video you will notice the robot doing weird dancing-like motions as it turns around. This is driver skill attempting to compensate for this problem. The other major problem was arm torque. A lot of effort was put into making the arm very light yet strong enough to support the weight of the robot while wall climbing. If you plan to make one of your own, make sure you do the moment arm calculations first to

ensure it is strong enough. We had to gear down the servo with external gears to have just barely enough torque . . . Results So how did we fare? Beating out 12 teams in regionals, the team (four of us) made it to nationals in California where we placed 7th (out of 14). Competition was really impressive and I wish I had videos to show it . . . The SolidWorks CAD file of this robot is available for download (7.1mb). If you use the CAD (or any part of it) for your robot, please give credit and link back to this page.
ROBOT BOAT TUTORIAL

About the Robot Boat, and Thailand This robot boat was originally built for a Thai art contest, for the 2006 Loy Krathong celebration. An art contest? A Loy what what? Thai?!? Ok let me explain . . . Loy Krathong is the second most popular Thai holiday of the year, right behind the Songkran festival - the country-wide, all-day-long water fight . . . So Loy Krathong, pronounced Loy Gratawng, is when Thai people get to make little pretty floating things out of banana leaves or whatever and place a candle on it. They then Loy (float) the Krathong (the floating thing) out onto the river. There are lots of stories behind it . . . But basically if your Krathong floats down the river past where the eye can see, and the candle doesn't go out, then you and your girlfriend/boyfriend will live together forever in happiness. Thai version of Valentine's Day, I guess.

So they have a contest every year for who can make the prettiest and coolest Krathongs. Sounded like fun, but me being an engineer I can do better. What if you can remote control your Krathong past the horizon? [evil grin] Well there isn't much I can 'robot up' in a floating banana leaf, so instead I decided to model my Krathong on the Thai Royal Barge - basically a giant canoe for the Thai king. This is what it looks like:

Now that you understand why my boat looks so odd, here is the video . . . And now for the important stuff . . . Designing a Robot Boat There are several important things you must consider when building a robot boat. There are weight issues, balance issues, hydrodynamics, waterproofing, the actuator, and sensing problems. There are many different designs you can do, so I will just go over what I did and what I learned and hope you can apply it to your own project.

Weight Weight is important when loading your robot boat with various equipment. It needs motors, batteries, sensors, controllers - it all adds up. If the weight is too much, the boat will sink dangerously low - or at least add a lot of drag to its movement. Fortunately, this is an easy calculation to do. First add up the weight of all components for the entire boat, including the hull. Then estimate the desired length and width of your boat hull. Here is how I derived the calculation to determine how much your boat will sink under a given weight and hull dimensions:
density * volume = mass
density of water * boat volume under water = boat weight
density of water * length * width * depth = boat weight
sinking depth = boat weight / (density of water * boat length * boat width)
* Note that my hull was very close to a rectangular cube, so if your hull has a different shape, use the average hull width expected to go underwater.
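As a quick sanity check, here is a minimal C sketch of the sinking-depth formula above. The 5 lb weight and the 2 ft x 0.75 ft hull are made-up numbers used only for illustration:

#include <stdio.h>

//sinking depth = boat weight / (density of water * boat length * boat width)
float sinking_depth(float weight_lb, float length_ft, float width_ft)
{
    float water_density = 62.4; //lbs per cubic foot, fresh water
    return weight_lb / (water_density * length_ft * width_ft); //depth in feet
}

int main(void)
{
    //example: a 5 lb boat with a 2 ft x 0.75 ft rectangular hull
    printf("sinking depth: %.3f ft\n", sinking_depth(5.0, 2.0, 0.75)); //about 0.05 ft, roughly 0.6 inches
    return 0;
}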

You want the sinking depth of your boat to be as minimal as possible, yet deep enough that the actuators can go into the water. If you are making a robot sub, then you want the sinking depth to equal the height of the sub to obtain neutral buoyancy. Density of water is about 62 lbs per cubic foot, and of salt water about 64 lbs per cubic foot. Hull Design The rule of thumb is to imitate the shapes of actual boats, as they have been well tested by real engineers. The hull of my boat was built out of a block of pink foam core, typically used for home insulation, and available super cheap at any local Home Depot. I found mine in a scrap pile in the back, but they sell door size blocks for under $15. To cut the foam core I used a bandsaw, and then smoothed the hull down with fine grain sandpaper.

I then painted over it with a single layer of acrylic paint for increased water protection, but also to increase the highly quantitative 'pretty-ness factor.' There are several types of hull shapes, so here is a quick description of each:
Flat Bottom Hull - Easiest to make and low water resistance. Wobbles a lot in waves. Examples of flat bottom boats might be Jon boats, small utility boats, and some high-speed runabouts.

Vee Bottom Hull - Very little wobble in waves, but more water resistance. Many runabouts use the vee-bottom design.

Round Bottom Hull - Works well at slow speeds, but requires a keel/stabilizer. Many trawlers, canoes and sailboats have round bottoms.

Multi-Hull - The most stable of all designs, but also the most complicated. Catamarans, trimarans, pontoon boats and some houseboats carry the multi-hull design. The wide stance provides greater stability.

The bottom of my hull was completely flat, as making other shaped hulls can cause complications. Things to keep in mind when choosing a hull are balance (top heavy = bad), turning speed, resistance against bobbing in waves and blowing over in wind, and manufacturability (complex shapes could be more effort than they are worth). I would say my flat bottom design worked well, but was not stable enough in strong winds. Actuators The Royal Thai Barges use human paddlers for actuation, but that wasn't possible for the robot so I needed another method. To keep it simple and remain as true as possible to how it should look, I chose the paddle wheel. Despite not even remotely attempting to optimize the paddle wheel, it performed incredibly well. I wanted to quickly make the boat remote control for the competition, so this required me to use servos (learn how to waterproof a servo). Higher speed DC motors would probably work better, but I was looking for something quick and simple. Unlike propeller driven boats, this type of actuation allows for the use of the simplest robot control algorithm - differential steering. Propeller driven boats are designed for speed and efficiency, but they don't perform so well in maneuverability. They also pose significant control algorithm challenges. Even more important, a paddle wheel boat can do 360's without ever moving forward.

Paddle Construction Just to emphasize once more, my paddle wheel design is not optimal and was based on artistic value over functionality. If you were to do a paddle wheel design, I suggest you base the design on a river boat paddle wheel, and place one wheel on each side of the boat as I did. A river boat paddle wheel design:

The method I used was to take a wooden dowel, bandsaw a slit into it, place pieces of balsa wood into the slit, then superglue the pieces in. The wooden dowel was then press fit into a block of foam core:

Control, Sensing, and Electronics The robot boat was based on a simple remote control design, using a 6V NiMH 1800mAh battery, a receiver, and two HS-311 servos. I had additional plans to attach a scanning Sharp IR rangefinder with an additional servo for obstacle avoidance, a digital compass for directional navigation, and a microcontroller for control. Note that a digital compass works very well on open water, as there is nothing to disturb the magnetic field near the boat. But after doing initial tests in a lake, I found that it had serious trouble going against strong winds. I was also not willing to go through the trouble to put a GPS on it for localization. Although I didn't do autonomous control, I would have put a backup remote control system on, so I could switch it on whenever the autonomous robot boat got stuck somewhere I didn't want to swim to . . . This is a close-up image of the remote control system:

See that white square thing under the electronics? I used a 1/8" sheet of HDPE to screw-mount my servos in place, and then velcroed the sheet to the unpainted foam core surface. The receiver and battery were velcroed in the same manner. I concluded that for navigating lakes the robot boat has a sub-optimal design. If I were to reattempt it, I would:

- use a shorter, wider, hull - design the hull after a river boat shape - optimize paddle wheels - use DC motors for highspeed paddling

CAD Design I did a quick design in CAD to make sure everything fit just right.

You may download the Thai Boat DWF here. Additional Images These are some additional photos I took:

How to Use This Algorithm Place two motorized wheels on your robot, one on either side. Send your move commands to the motors by either using a motor driver or H-bridge. Or if you are using servos, just send the required pulse width. Note that this algorithm works great with the photovore algorithm.
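Here is a minimal C sketch of the idea, assuming hypothetical set_left_speed() and set_right_speed() functions that wrap whatever motor driver, H-bridge, or servo pulse code your robot uses (speeds run from -100 for full reverse to +100 for full forward in this made-up convention):

//drive straight: both wheels at the same speed
void drive_forward(void)  { set_left_speed(100);  set_right_speed(100);  }
void drive_reverse(void)  { set_left_speed(-100); set_right_speed(-100); }

//turn in place: wheels in opposite directions
void turn_left(void)      { set_left_speed(-100); set_right_speed(100);  }
void turn_right(void)     { set_left_speed(100);  set_right_speed(-100); }

//gentle curve: just make one wheel a little slower than the other
void curve_left(void)     { set_left_speed(60);   set_right_speed(100);  }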

PHOTORESISTOR ALGORITHMS

Photovore
Object Avoidance
Photovore, Improved
Split Brain
Photophobe

Photovore The photovore is a robot that chases light, and is perhaps the simplest of all sensor algorithms. If you are a beginner, this should be your first algorithm. For this to work, your robot needs at least two light detecting sensors, typically photoresistors or IR emitter/detectors, out in front and spaced apart from each other. One on the left side of your robot, the other on the right side. In your code, have the microcontroller read the analog value from both sensors. Then do a comparison - the sensor that reads more light is the direction your robot should turn. For example, if the left photoresistor reads more light than the right photoresistor, your robot should turn or tend towards the left. If both sensors read about the same value, meaning they both get the same amount of light, then your robot should drive straight.

pseudocode:
read left_photoresistor
read right_photoresistor

if left_photoresistor detects more light than right_photoresistor
    then turn robot left
if right_photoresistor detects more light than left_photoresistor
    then turn robot right
if right_photoresistor detects about the same as left_photoresistor
    then robot goes straight
loop

If you haven't discovered this yet, the photovore algorithm works great with the differential drive algorithm. =)
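Translated to C, the same case-based logic might look like this minimal sketch. read_adc(LEFT_EYE), read_adc(RIGHT_EYE), and the drive functions are hypothetical placeholders for your own ADC and motor code, and DEADBAND is a made-up tolerance for 'about the same':

#define DEADBAND 10   //how close the two readings must be to count as equal

while (1)
{
    signed long left  = read_adc(LEFT_EYE);   //assumes more light = bigger reading
    signed long right = read_adc(RIGHT_EYE);
    signed long diff  = left - right;

    if (diff > DEADBAND)
        turn_left();          //left eye sees more light
    else if (diff < -DEADBAND)
        turn_right();         //right eye sees more light
    else
        drive_forward();      //both eyes see about the same - go straight
}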

Object Avoidance Photovore Mod

Using the same exact code, and same exact robot, you can do a small modification to your robot to give your photovore the ability to avoid objects. Bend your photoresistors so that they point downwards, and are close to the ground. Depending on the lighting, objects will all cast shadows onto the ground. Avoiding darkness, your photovore robot is naturally an object avoider. Of course if the lighting shines directly onto an object, or if you have dark floors with white walls, it might not work so well . . . But it's easy and it will work. For more advanced object avoidance algorithms, check out my tutorial for the IR Rangefinder. If you are making a line following robot, make a photovore, point the photoresistors towards the ground, and space the photoresistors so that the distance is less than the width of the white line. You can use the exact same algorithm.

Photovore Algorithm, Improved This algorithm does the same as the original, but instead of being case-based it works under a more advanced Fuzzy Logic control algorithm. Your robot will no longer just have the three modes of turn left, turn right, and go forward. Instead it will have commands like 'turn left by 10 degrees' or 'turn right really fast', and with no additional lines of code! pseudocode:
read left_photoresistor
read right_photoresistor

left_motor = (left_photoresistor - right_photoresistor) * arbitrary_constant
right_motor = (right_photoresistor - left_photoresistor) * arbitrary_constant
loop

Photovore, Split Brain Approach This algorithm works without comparison of photoresistor values. Instead, just command the right motor based on light from the left sensor, and the left motor with only data from the right sensor.

You can also get interesting variations by reversing the sensors for a cross-brain algorithm. pseudocode:
read left_photoresistor
read right_photoresistor

move left_wheel_speed = right_photoresistor * arbitrary_constant
move right_wheel_speed = left_photoresistor * arbitrary_constant
loop

The human brain to an extent takes the split brain approach. Both halves of the brain, naturally connected by a nerve called the corpus callosum, share information between each other. But if you sever this nerve (as does happen, although rarely, for medical reasons), the brain mostly remains fully functional. People can lead normal lives, despite their brain being split in half! (seriously, I'm not joking) This means the brain operates using the split brain approach, although on many occasions there are advantages to sharing information (hence the reason for the nerve).

Photophobe The photophobe robot is a robot that runs away from light instead of chases light. There are two ways you can do this. The first is simply to reverse the left and right

photoresistors, so that the left sensor is on the right side, and the right sensor is on the left side. With no changes of code, it will avoid light!

But if you want to do the code method, make this slight change: pseudocode:
read left_photoresistor
read right_photoresistor

if left_photoresistor detects more light than right_photoresistor
    then turn robot right
if right_photoresistor detects more light than left_photoresistor
    then turn robot left
if right_photoresistor detects about the same as left_photoresistor
    then robot goes straight
loop

pseudocode:
input sensor reading

make decision based on sensor reading, then do one of the below actions:
to drive straight: both wheels move forward at the same speed
to drive reverse: both wheels move back at the same speed
to turn left: the left wheel moves in reverse and the right wheel moves forward

to turn right: the right wheel moves in reverse and the left wheel moves forward

Two robot soccer teams of custom differential drive robots:

PROGRAMMING - FUZZY LOGIC

Fuzzy Logic Background Computers define EVERYTHING in binary, a simple 0 or 1. It's either on, or off. But the world isn't black and white, and not everything is simply true or false. Fuzzy Logic (FL) is a numeric representation where the answer isn't just Yes or No, but a grey Maybe. It isn't where something is just very hot or very cold, but instead could be lukewarm, slightly chilly, etc. Humans operate much better when fuzzily describing things, instead of simply using black and white for everything. If we describe a temperature we might say 73 F, describing a food we might say it's a little tasty but could be better, or naming a color we might say dark green or light green. To not have that range of descriptions would really hamper our descriptive abilities. If robots are to ever out-perform humans, they need this ability too. Disadvantages of Binary Logic If a robot can only make decisions based on two extremes, its actions will also be two extremes. It's like saying a robot can only go two speeds, 1mph and 30mph, and no speed in between. Suppose your sensors give you values right between both extremes; your robot would become very jerky bouncing between both those speeds. This is referred to

as oscillation, or when sensor data quickly bounces between two different case-based (pre-defined) actions. case based example:
sensor data can range between 0 and 255
sensor value bounces between 127 and 128 (middle of range)
10 is a made up tweak constant

case based pseudocode:


if (0->127) go 1mph
if (128->255) go 30mph

(note: On an 8 bit analog-to-digital converter, 127.5 is not possible) Case-based oscillation is very bad for your robot - jerky motions can quickly wear and break mechanical parts, and will waste huge amounts of energy in acceleration/deceleration. You could perhaps write a long case-based list of sensor to speed conversions, but that would be painfully long, and will waste memory space and processing time.

This is where fuzzy logic comes in, as fuzzy control allows very smooth transitions between actions with less code. *See the addendum below. fuzzy logic example:
sensor data can range between 0 and 255
sensor value bounces between 127 and 128 (middle of range)

fuzzy based pseudocode:


speed = sensor/10

so on a scale of 0mph to 30mph: if say your sensor = 127, speed = 12.7 mph, and if sensor = 128, speed = 12.8 mph. (Note: So this above example isn't exactly fuzzy logic, but an oversimplified equation so that you can have a basis for understanding the later equations in this tutorial. This equation can be considered fuzzy logic if all added arbitrary weighted values equal 1, as explained later.) Not only is there a smooth transition between speeds, but the fuzzy program was much shorter and easier to write too! There are many other advantages. Advantages of Fuzzy Logic
It is inherently robust since it does not require precise, noise-free inputs. This means you can fudge (mmmmm . . . fuuuddggee) your equations with additional arbitrary constants in your equation.
New sensors can easily be incorporated into the system simply by generating appropriate governing equations. Often you can combine these equations into one.
Any number of inputs can be processed and outputs generated. Fuzzy logic is not limited to a few feedback inputs and one or two control outputs, nor is it necessary to measure or compute rate-of-change parameters in order for it to be implemented. Any sensor data that provides some indication of a system's actions and reactions is sufficient. This allows the sensors to be inexpensive and imprecise, thus keeping the overall system cost and complexity low.
Fuzzy logic can control nonlinear systems that would be difficult or impossible to model as case-based.
Fuzzy controllers are far simpler than knowledge-based systems. You do not need to be a controls expert, or fully understand the physics of your robot, to design a fuzzy logic control system.
Just like in PID control, fuzzy logic can be "tweaked" once the system is operating in order to optimize performance. Generally, fuzzy logic is so forgiving that the system will probably work the first time without any tweaking.
Fuzzy logic in most cases takes up significantly fewer lines of code in programming. Having memory issues? This may be your solution.
A disadvantage . . . There are exceptions where case-based programming is simpler to implement than fuzzy logic, such as when your mechanism can only work at a set of fixed outputs. Two examples would be gear changers, and navigating to fixed locations of objects centered around a robot arm. The following algorithms would still be possible, just highly complex and therefore not recommended. You can also use both methods - just mix and match to take only the advantages of both.

Defining Fuzzy Logic With a Microcontroller Most sensors used in hobby robotics operate on analog voltages, meaning that they output a fuzzy value between 0 and 1 (0V and 5V). This output then goes to your microcontroller's analog to digital converter (ADC). The typical microcontroller uses an 8 bit ADC, which basically means it has a fuzzy range of 0 to 255 (0 being 0V and 255 being 5V). What exactly that 8 bit value means greatly depends on what your robot sensors are and sense, so you would then need an equation to convert that 0-255 value into something more meaningful - a color of an object, a distance to a target, a sound amplitude, etc. You would then put that number into another equation (usually just multiplied or divided by some user defined tweak constant) to be applied to the robot actuators. As shown in the fuzzy logic example above, this process is fairly simple. Example of Fuzzy Logic A photovore robot would make a good example for fuzzy logic. Your robot reads in sensor values, plugs those values into an equation, then takes the new values and sends them to the motors. This conceptual robot has two photoresistors plugged into two ADC channels:
read left_sensor;
read right_sensor;

left_motor_speed = right_sensor * arbitrary_constant;
right_motor_speed = left_sensor * arbitrary_constant;

So basically, the more light a sensor gets, the faster the opposite side motor goes - making the bot turn into the light. Just 4 lines long! If/then statements are typically many more, and don't work so well.

The above example has a flaw though, in that if the robot is in a dark area neither motor will work. Another better example:

read left_sensor;
read right_sensor;

left_motor_speed = (left_sensor - right_sensor) * arbitrary_constant;
right_motor_speed = (right_sensor - left_sensor) * arbitrary_constant;

Still, only four lines of code, yet the robot can track any light at any brightness level. The above equation still has an oscillation problem when both sensors bounce near zero, and the robot won't move if both sensors are zero, but fiddling with the arbitrary constant and adding additional parts to the equation will solve all of that.

One last example, but one of many solutions:


left_motor_speed = (left_sensor - right_sensor) + arbitrary_constant/(right_sensor+1)^2;
right_motor_speed = (right_sensor - left_sensor) + arbitrary_constant/(left_sensor+1)^2;

Mixing Digital and Analog Sensors Because fuzzy logic is great for sensor fusion, I should show you how to mix analog and digital sensors. **See the addendum below. Starting with the simpler photovore code above, let's add something in front of our equations to represent digital collision sensors. This new code will allow the photovore to back away intelligently during wall collisions.
left_motor_speed = (left_bumper*(-100)) + (left_sensor - right_sensor) * arbitrary_constant;
right_motor_speed = (right_bumper*(-100)) + (right_sensor - left_sensor) * arbitrary_constant;

Now if the bumper doesn't detect a wall (bumper = 0), then the algorithm acts like normal because the first term is zero. But if bumper=1 (wall collision), that first term becomes -100, meaning the motor will suddenly reverse and back away from the wall. You can also add a time delay in the code so the bot will back up for X amount of time, but for simplicity I'd just add a resistor/capacitor on the sensor with X time constant. As you can see, no additional lines of code were added (other than reading the new sensors). It's also no longer pure analog, but now has digital components too. How's that for sensor fusion? ;) Note that you could use other digital sensors, such as sonar or encoders, in a similar way. Your only limitation is how mathematically clever you are . . . Negative Values in Fuzzy Logic Some robot sensors, such as torque cells, give both positive and negative outputs. Now you may be thinking, 'I can only use 0->255, what if I want to include negative values?' Well, your equation could scale your range into the negatives.

negative conversion example:


long signed int sensor_value; //variable type allows negative values in code

sensor_value = sensor_value - 255/2; //shifts everything into the negative, so a reading of 127 now represents 0

There are many other ways to do this that I won't get into, but you get the idea. For further reading check out the programming variables tutorial.
PROGRAMMING - VARIABLES

C Variables Controlling variables in your program is very important. Unlike with programming computers (such as in C++ or Java) where you can call floats and long ints left and right, doing so on a microcontroller would cause serious problems. With microcontrollers you need to always be careful about limited memory, limited processing speeds, overflows, signs, and rounding. C Variable Reference Chart
definition           size     number span allowed
short int            1-bit    0, 1 (False, True)
char                 8-bit    a-z, A-Z, 0-9
int                  8-bit    0 .. 255
unsigned int         8-bit    0 .. 255
signed int           8-bit    -128 .. 127
long int             16-bit   0 .. 65535
unsigned long int    16-bit   0 .. 65535
signed long int      16-bit   -32768 .. 32767
float                32-bit   1.2 x 10^(-38) .. 3.4 x 10^(38)

Limited Memory Obviously the little microcontroller that's the size of a quarter on your robot isn't going to have the practically infinite memory of your PC. Although most microcontrollers today can be programmed without too much worry about memory limits, there are specific instances where it would be important. If your robot does mapping, for example, efficient use of memory is important. You always need to remember to use the variable type that

requires the least amount of memory yet still stores the information you need. For example, if a variable is only expected to store a number from 100 to 200, why use a long int when just an int would work? Also, the fewer bits that need to be processed, the faster the processing can occur. Limited Processing Speeds Your microcontroller is not a 2.5 GHz processor. Don't treat it like one. Chances are it's a 4 MHz to 20 MHz processor. This means that if you write a mathematically complex equation, your microcontroller could take up to seconds to process it. By that time your robot might have collided into a cute squirrel without even knowing!!! With robots you generally want to process your sensor data about 3 to 8 times per second, depending on the speed and environment of your robot. This means you should avoid all use of 16-bit and 32-bit variables at all costs. You should also avoid all use of exponents and trigonometry - both because they are software implemented and require heavy processing. What if your robot requires a complex equation and there is no way around it? Well, what you would do is take shortcuts. Use lookup tables for often-made calculations, such as for trigonometry. To avoid floats, instead of 13/1.8 use 130/18 by multiplying both numbers by 10 before dividing. Or round your decimal places - speed is almost always more important than accuracy with robots. Be very careful with the order of operations in your equation, as certain orders retain higher accuracy than others. Don't even think about derivatives or integrals.
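Here is a minimal sketch of the two shortcuts just mentioned: integer scaling instead of floats, and a lookup table instead of a live trigonometry call. The function names, the 10-degree step size, and the table values are made up for illustration (the sine values are precomputed on your PC and scaled by 100):

//integer scaling: distance/1.8 rewritten as (distance*10)/18, no float math needed
unsigned long divide_by_1_point_8(unsigned long distance)
{
    return (distance * 10) / 18;
}

//coarse sine lookup table, one entry per 10 degrees, values scaled by 100
const unsigned char sin_table[10] = {0, 17, 34, 50, 64, 77, 87, 94, 98, 100};

//returns 100*sin(angle) for angles from 0 to 90 degrees - fast but coarse
unsigned char sin100(unsigned char degrees)
{
    return sin_table[degrees / 10]; //nearest lower table entry
}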
Overflows An overflow is when a value for a variable exceeds the allowed number span. For example, an int for a microcontroller cannot exceed 255. If it does, it will loop back.
unsigned int variable = 255;
variable = variable+1; //variable will now equal 0, not 256!!!

To avoid this overflow, you would have to change your variable type to something else, such as a long int. You might also be interested in reading about timers, as accounting for timer overflows is often important when using them. Signs Remember that signed variables can be either negative or positive but unsigned variables can only be positive. In reality you do not always need a negative number. A positive number can often suffice because you can always arbitrarily define the semantics of a variable. For example, numbers between 0 and 128 can represent negatives, and numbers between 129 and 255 can represent positive numbers.

But there will often be times when you would prefer to use a negative number for intuitive reasons. For example, when I program a robot, I use negative numbers to represent a motor going in reverse, and a positive for a motor going forward. The main reason I would avoid using negative numbers is simply because a signed int overflows past 127 (wrapping to -128) while an unsigned int overflows past 255 (wrapping to 0). Extras For further reading on programming variables for robots, have a look at the fuzzy logic tutorial. Examples of Variables in C Code: Defining variables:
#define ANGLE_MAX 255 //global constants must be defined first in C
int answer;
signed long int answer2;
int variable = 3;
signed long int constant = -538;

variable math examples (assume answer is reset after each example):


answer = variable * 2; //answer = 6

answer = variable / 2; //answer = 1 (because of rounding down)
answer = variable + constant; //answer = 233 (because of overflows and signs)
answer2 = (signed long int)variable + constant; //answer2 = -535
answer = variable - 4; //answer = 255 (because of overflow)
answer = (variable + 1.2)/3; //answer = 1 (because of rounding)
answer = variable/3 + 1.2/3; //answer = 1 (because of rounding and order of operations)
answer = answer + variable; //answer = RANDOM GARBAGE (because answer was never initialized)

What if your sensor returns positive AND negative voltage values? Then set your microcontroller ground to 0V and your sensor ground to 2.5V (by using a voltage regulator). This way anything below 2.5V represents a negative value, and anything above it represents a positive voltage. Make sure your sensor output values are between

0V to 5V or you may damage the ADC. To do this, carefully scale a non-inverting amplifier with the right resistor values and place it between your sensor output and the ADC input. *Addendum on the Definition of Fuzzy Logic Many will argue that the above description is a very poor description of true fuzzy logic, and they are correct. Fuzzy logic was never developed or intended for use on robots. Its use was for describing things that didn't fit a binary description. As such, this is not true fuzzy logic. However . . . The point of this tutorial is to adapt fuzzy logic into a functional, practical, shortened control algorithm that you may use on your robot. All equations do fit the definition of fuzzy logic, despite being permuted and rewritten into more useful (yet non-standard) notation. **Addendum on Analog Computers Some may also argue that this is only an analog computer, and they are partially wrong. Many of the examples I gave use analog sensors, and have analog outputs to analog motors. In those cases, yes, it could be considered an analog computer. But what about the case where I use an on/off switch? Or what about a case of using binary solenoids for output, instead of motors? This cannot be argued to be an analog computer in every case. ***Addendum on the Forum Still disagree that this is true Fuzzy Logic? Check out my forum post on fuzzy logic.
PROGRAMMING - LINE FOLLOWING

This page goes together with the tips to win MOBOT below.


ROBOT COMPETITIONS - MOBOT

MOBOT is a competition that looks very easy at first glance, but turns out to be much more difficult than participants ever expect. Anything specific about MOBOT, such as rules, past rankings, a map of the course, and any other information can be found on their website, MOBOT. This page will be dedicated to helping you win instead. What is MOBOT? This competition is held every year during Carnival at Carnegie Mellon University. A quick note: it is open for anyone to participate, but placing and prize money are available to students and alumni only. Want to see more? View the official MOBOT competition videos, some unofficial MOBOT competition videos, and the most recent MOBOT 2008 competition videos. Also, read MOBOT news articles in The Tartan, and the 2008 races on c|net's news.com. Quick rules rundown: design and build a fully autonomous robot (or train a non-sapien animal such as a dog - yes, I am serious, read the rules) which can follow a white line. Sound easy? They've got kits out there that can do that, right? WRONG. One word: the competition is OUTSIDE. No more perfect idealized world of perfectly flat floors, high contrast lines, and controlled light settings. 50% of all competition entries cannot even get past the first gate (excluding the start gate, which by the way I have also seen an occasional MOBOT fail). So to help you win MOBOT, first I am going to explain where so many robots go wrong, and then I will explain how the winning robots solved these problems. I want to stress that you should solve each of these problems in this order. Only bother with the next problem after you have solved the previous. Trust me. If you are curious about my credentials, I have built 6 MOBOTs, competed with four (all named Pikachu), and placed each time.

The white line is contrasted on a grey concrete background. This is the first problem all contestants 'solve.' It is rare to find one that has not. But believe it or not, their algorithms and circuits are still often lacking, and on competition day 1/4th of entries fail from this problem. When designing your line detection circuit/algorithm, test test test. Optimize for that perfect resistor value, or that perfect camera algorithm. You need to maximize your binary threshold as best as possible. Remember also that the concrete color changes several times down the course. This makes calibrating difficult, so you need to test on each type of concrete. I personally have found calibrating on the concrete used by the first slope to be optimal for the IR emitter/detector, but it might have been circuit specific. To help with calibration, write a small algorithm I like to call auto-calibration. Basically, when you turn on your robot, it reads a measurement obtained from the white line, and another measurement from the concrete. Have your algorithm find the average of those two values and set that to be your binary threshold value. You should do this for each IR sensor because your circuits will have small resistance variations. Auto-calibration, as opposed to manual calibration, is great for unexpected weather. Ever notice how concrete changes color when it is wet?

Sunlight can flood sensors to the point of being useless. Our sun emits large quantities of light in the infrared spectrum, and although the human eye is blind to it, electronics are not. For obvious reasons the IR emitter/detector circuit is quite sensitive to even the smallest amount of sunlight. So it is important to develop a shield that blocks the light. There have been many variations of this: cereal boxes, garbage bags, boxes, electrical tape, black felt cloth (which does not work well), and duct tape to name a few. Remember, the shield must always be flush with the ground, so it has to be flexible too. Do not forget to test your shield - take measurements inside and outside, and if you notice measurement variation, your shield is not good enough.

Hills are a huge control problem. 80% of all MOBOTs that can follow the white line lose it coming down the very first hill. There are two reasons for this. The first is speed control. The MOBOT goes down the hill at full speed and cannot make the sudden sharp left turn at the bottom. This means you need a motor strong enough to stop your robot, sensors that can update as fast as your robot moves, and a feedback control system to detect when you are going too fast. Sound too complicated? Modified servos WILL ALWAYS work for this problem. But they are too slow to break any speed records. Your other option is to use a DC motor braking technique. Good luck. This is a challenge which most contestants fail at, or think they can skip.

The other problem going down hills is that as the slope changes, the angle at which your sensors see the white line changes. This is a huge problem for IR, as a change in distance from the ground can yield significant sensor reading differences. A solution to this would be to put the IR on a separate suspension system to always stay level with the ground. Solving this problem will also solve the problem with cracks in the concrete. Speaking of suspension systems, I have occasionally seen MOBOTs bottom out at the bottom of the hill because they place IR in the front of their robot. If you decide to do this, you probably need suspension on your robot. As additional comments, I have not seen any robot successfully detect the hill, although it is possible. Mercury switches and accelerometers have been tried, but often vibration is too great and hence accidentally triggers the sensors too often. Interestingly, counting hills can be useful for knowing when to turn on your decision point algorithm.

The sensors get a poor view of the course. This goes for both camera vision and the more traditional IR. Camera placement is very important. Not too high or too close, not too angled either. Test, don't guess! Also, if your camera refresh rate is slow, have your camera look slightly ahead. This way it can 'see into the future' and technically predict when to make a change before it is too late. For any sensor you must also consider the center of rotation of your robot. Put your sensors too far away, and when your robot turns the sensors cannot keep up with the sudden and rapid change. For IR, you must also make a very important design decision: the placement and number of IR in your array. Optimally you want as many as possible - a high resolution is why camera vision usually outperforms IR. But your algorithm gets a little slower and more complicated with each additional IR added. Plus you need to wire more circuits, and calibrate more through testing. I recommend opting for a high resolution of 4-8 IR, but technically it is possible to do the entire course with just one. I actually have done the entire course with just two, but I did it out of simplicity and laziness, not optimality. I have posted a few MOBOT algorithms to help.
Decision points. Don't bother with them. At least not until you have already designed a robot that can consistently do the rest of the course. Too many contestants try to solve the decision point problem before they figure out how to survive the first hill. If you can do the rest of the course, the decision points are fairly easy. I have seen encoders used; electrical, mechanical, and Sharp IR rangefinder gate counters; and both camera vision and IR for detecting the line splits. Remember, if you can get to the decision points, you probably have 1st or 2nd place already.

Fertilizer dust can jam IR. My last MOBOT is probably the first and only in MOBOT history to encounter this problem. A week before the competition a layer of fertilizer dust was sprayed on the grass near the course. Due to a lack of humidity the dust quickly covered the course. It was in small enough quantities not to affect readings, however as my robot traveled slowly the dust collected on the sensors themselves.

Within 15 feet my black IR had a thick grey layer of dust preventing any more line following. I believe it to be statically charged dust, how else would it stick? My desperate solution, which also got me judge's choice that year, was to place a high powered fan on the front to blow the dust away. The robot would consistently go 3x further, but the dust was still too bad. Others have also suggested using air canisters to blow air on the sensors. A month later I retried the course and made it to the decision points with little dust issue.

Working Examples
I've documented my previous MOBOTs and included video of each following a white line:

2003: Pikachu (video only)
2004: Taurus 2
2005: Fuzzy the Omni-Wheel Robot
2006: Pikachu IV
2007: Line Following Robot
2008: Experimental Robot Platform

Final Comments
Most contestants fail from just a lack of testing and preparation, often waiting until 3 weeks before the competition to start. If you do this do not expect it to work. MOBOT can be solved, but only given enough time. So good luck, start early, and hopefully my guide will save you from mistakes.

If you have ever competed in MOBOT and would like to add your two cents, message me! Many of you have had interesting code/algorithms you should share.

note: this page is a placeholder until a better tutorial is written

Autoconfig
This algorithm allows your robot to determine its own binary threshold for detecting the line.

How to Use This Algorithm
Set up your IR emitter/detector sensors and place all of them facing the concrete. Get a reading, then place all of them on the white line and get another reading. Now you have two readings for each IR. Find the average of each, and set them as the threshold for each. Pseudocode:
read IR1c //read on concrete
read IR2c
read IR1w //read on white line
read IR2w

IR1thresh = (IR1c + IR1w)/2
IR2thresh = (IR2c + IR2w)/2
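Here is a minimal C sketch of the auto-calibration idea, assuming AVRlib's a2dConvert8bit() (the same a2d.c file used elsewhere in this document) and that the two sensors sit on hypothetical ADC channels 0 and 1; the pause while you move the robot from the concrete onto the white line is left as a placeholder comment.

#include "a2d.h"   //AVRlib analog-to-digital functions

unsigned char IR1thresh, IR2thresh;   //binary thresholds, one per sensor

void auto_calibrate(void)
{
    unsigned char IR1c, IR2c, IR1w, IR2w;

    //first reading: sensors placed over the concrete
    IR1c = a2dConvert8bit(0);
    IR2c = a2dConvert8bit(1);

    //...pause here (button press or long delay) while you move onto the white line...

    //second reading: sensors placed over the white line
    IR1w = a2dConvert8bit(0);
    IR2w = a2dConvert8bit(1);

    //threshold is halfway between the two readings, calculated per sensor
    IR1thresh = (IR1c + IR1w) / 2;
    IR2thresh = (IR2c + IR2w) / 2;
}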

Infrared line detection with 2 IR
This algorithm allows your robot to follow a white line.

How to Use This Algorithm
Set up your two IR emitter/detector sensors so that one is on the left of the line and the other on the right. As long as neither detects the white line it means your robot is following the line. If either detects the line, the robot must make course corrections. Pseudocode:

read leftIR
read rightIR

if leftIR detects line
    then turn robot left until not true
if rightIR detects line
    then turn robot right until not true
if neither IR detects line
    then robot goes straight
loop
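And a minimal C sketch of the same idea, reusing the thresholds from the auto-calibration sketch above. The ADC channels and the drive helpers turn_left(), turn_right(), and go_straight() are placeholders for your own hardware; also note that whether 'detects the line' means reading above or below the threshold depends on your particular IR circuit, so flip the comparisons if needed.

#include "a2d.h"   //AVRlib analog-to-digital functions

extern unsigned char IR1thresh, IR2thresh;   //from auto_calibrate()
void turn_left(void);     //your motor code goes in these
void turn_right(void);
void go_straight(void);

void follow_line(void)
{
    while(1)
    {
        unsigned char leftIR  = a2dConvert8bit(0);
        unsigned char rightIR = a2dConvert8bit(1);

        if (leftIR > IR1thresh)        //left sensor sees the white line
            turn_left();
        else if (rightIR > IR2thresh)  //right sensor sees the white line
            turn_right();
        else
            go_straight();             //line is still between the two sensors
    }
}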
HOW TO TIME YOUR MICROCONTROLLER (WITHOUT A HARDWARE TIMER)

Timing your Microcontroller
Suppose you wanted your microcontroller to pass some time in your software, but without using a hardware timer? Hardware timers can cause problems, and every microcontroller has a limited number of them, so there are many reasons why this could be useful. I'm not saying hardware timers are bad, and in some cases it's better to use them, but there are other ways to do timing that I'd like to explain. For those who are familiar with my $50 Robot, you have probably noticed the delay_cycles function, and that it's used for things such as for the servos. In this tutorial I will explain how I calculated such delay cycles and what they actually mean.

Cycles
A 'cycle' is the smallest amount of time it takes for your microcontroller to do 'nothing.' For example, suppose I ran this while loop on a microcontroller:

cycles = 8;

void delay_cycles(unsigned long int cycles)
{
    while(cycles > 0)
        cycles--;
}

Although that while loop is effectively doing nothing, it's serving another purpose of passing time. In this case 8 cycles, representing some unknown time, will pass. How long does a cycle take? Well this depends on many things. It depends on the microcontroller hardware, the frequency of the attached crystal, software settings, input voltage, fuses, etc. How do you calculate it? There are several ways, but this tutorial will demonstrate a cheap and easy way to do it experimentally. If you have an oscilloscope, you can measure exactly how long a cycle is by measuring the frequency of an output pin. Simply put this code on your microcontroller:
loop:
    make digital port high
    delay_cycles(10);
    make digital port low
    delay_cycles(10);

But what if you don't have a $2k oscilloscope? Well, there is another way . . . Attach an LED + resistor to that digital port. Then make the delay some really long time, something like this:
loop:
    make digital port high
    delay_cycles(65500) x 10;
    make digital port low
    delay_cycles(65500) x 10;

What should now happen is your LED will turn on, wait a bit, then turn off, and wait a bit. Now this means that 65500*10*2 cycles occur for every flash of the LED. Next, get out your watch or Windows clock and count the number of times your LED turns on in one minute.

Let's say you counted it flash 30 times. I will call that number 'count'. That means:

65500*10*2*count/(60 seconds) = cycles per second

or if we solve that equation for our example,

655000 cycles/second

But what you want is to know how many cycles it takes for a certain time period to pass, no? For example, if you wanted to control servos, you would need a square wave of 1.5ms high and 20ms low. How many cycles is 1.5ms or 20ms? Calculating:

655000 cycles/second -> 655 cycles/ms
655 cycles/ms * 1.5ms = 982.5 cycles ~= 982 cycles

So to get your servo to stop moving, you'd want to send a signal 1.5ms long, or 982 cycles:
turn servo on
delay_cycles(982);
turn servo off

Using the same equation for 1ms and 2ms, the extremes of servo motion, we calculate some more:

655 cycles/ms * 1ms = 655 cycles
655 cycles/ms * 2ms = 1310 cycles
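As a rough example, here is a minimal AVR-style sketch that sends one servo pulse using these numbers, reusing the delay_cycles() function from earlier. The cycle counts (982 for ~1.5ms, 13100 for ~20ms) come from the example calibration of 655 cycles/ms, and PD0 as the servo signal pin is just an assumption for illustration.

#include <avr/io.h>

void servo_center_pulse(void)
{
    PORTD |= (1 << PD0);     //servo signal high
    delay_cycles(982);       //~1.5ms high: center/stop position
    PORTD &= ~(1 << PD0);    //servo signal low
    delay_cycles(13100);     //~20ms low before the next pulse (655 cycles/ms * 20ms)
}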

The Final Equation
Now suppose you wanted to create a delay of 5 seconds. Perhaps you wanted your robot to wait for some reason . . . Well, this is the equation you would use:

cycles = (calculated cycles per second) * (time you want to pass)

655 cycles/ms * 5000 ms = 3275000 cycles

But you aren't quite done, as a 16-bit long int can only store up to 65535! Doing a delay_cycles(3275000) will not work if your cycle count variable is only 16 bits! So calculate this:

3275000/65535 ~= 50 times

and then program this:
loop 50 times:
    delay_cycles(65535);

to get your 5 second delay. And there you have it, a method to time your microcontroller experimentally!
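Putting the pieces together, here is a minimal C version of that 5 second delay, again reusing the delay_cycles() function from earlier. The loop count of 50 comes from the example calibration of 655 cycles/ms; measure and recalculate it for your own setup.

void delay_5_seconds(void)
{
    unsigned char i;

    for (i = 0; i < 50; i++)      //50 * 65535 cycles ~= 3,275,000 cycles ~= 5 seconds
        delay_cycles(65535);
}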
PROGRAMMING - PID CONTROL

PID Control
A proportional integral derivative controller (PID controller) is a common method of controlling robots. PID theory will help you design a better control equation for your robot. Shown here is the basic closed-loop (a complete cycle) control diagram:

The point of a control system is to get your robot actuators (or anything, really) to do what you want without . . . ummmm . . . going out of control. The sensor (usually an encoder on the actuator) will determine what is changing, the program you write defines what the final result should be, and the actuator actually makes the change. Another sensor could sense the environment, giving the robot a higher-level sense of where to go.

Terminology
To get you started, here are a few terms you will need to know:

error - The error is the amount by which your device isn't doing something right. For example, if your robot is going 3mph but you want it to go 2mph, the error is 3mph-2mph = 1mph. Or suppose your robot is located at x=5 but you want it at x=7, then the error is 2. A control system cannot do anything if there is no error - think about it, if your robot is doing what you want, it wouldn't need control!

proportional (P) - The proportional term is typically the error. This is usually the distance you want the robot to travel, or perhaps a temperature you want something to be at. The robot is at position A, but wants to be at B, so the P term is A - B.

derivative (D) - The derivative term is the change in error made over a set time period (t). For example, if the error was C before and now it's D, and t time has passed, then the derivative term is (D-C)/t. Use the timer on your microcontroller to determine the time passed (see timer tutorial).

integral (I) - The integral term is the accumulated error made over a set period of time (t). For example, if your robot is continually off by a certain amount, on average, all the time, the I term will catch it. Let's say that over the interval t1 the error was A, over t2 it was B, and over t3 it was C. The integral term would then be A*t1 + B*t2 + C*t3 (each error multiplied by the length of time it persisted).

tweak constant (gain) - Each term (P, I, D) will need to be tweaked in your code. There are many things about a robot that are very difficult to model mathematically (ground friction, motor inductance, center of mass, duct tape holding your robot together, etc.). So oftentimes it is better to just build the robot, implement a control equation, then tweak the equation until it works properly. A tweak constant is just a guessed number that you multiply each term with. For example, Kd is the derivative constant. Ideally you want the tweak constant high enough that your settling time is minimal but low enough so that there is no overshoot.

P*Kp + I*Ki + D*Kd

What you see in this image is typically what will happen with your PID robot. It will start with some error and the actuator output will change until the error goes away (near the final value). The time it takes for this to happen is called the settling time. Shorter settling times are almost always better. Often times you might not design the system properly and the system will change so fast that it overshoots (bad!), causing some oscillation until the system settles. And there will usually be some error band. The error band is dependent on how fine a control your design is capable of - you will have to program your robot to ignore error within the error band or it will probably oscillate. There will always be an error band, no matter how advanced the system. ignoring acceptable error band example:
if abs(error) <= .000001 //subjectively determined acceptable
    then error = 0; //ignore it

The Complete PID Equation
Combining everything from above, here is the complete PID equation:

Actuator_Output = Kp*P + Ki*I + Kd*D

or in easy to understand terms:

Actuator_Output = tweakA * (distance from goal) + tweakB * (change in error) + tweakC * (accumulative error)
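Here is a minimal C sketch of that equation written as an update function you would call once per loop. The gains Kp, Ki, and Kd are the tweak constants you tune by hand, dt is the time since the last update, and the function name is just an assumption for illustration; it is a sketch of the math above, not a drop-in library.

float Kp = 1.0, Ki = 0.0, Kd = 0.0;   //tweak constants: start with P only, then tune

//returns the actuator output given the desired and measured values
float pid_update(float desired, float measured, float dt)
{
    static float integral = 0;
    static float last_error = 0;

    float error = desired - measured;                //the P term (distance from goal)
    integral += error * dt;                          //the I term (accumulated error)
    float derivative = (error - last_error) / dt;    //the D term (change in error)
    last_error = error;

    return Kp*error + Ki*integral + Kd*derivative;
}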

Simplifications
The nice thing about tuning a PID controller is that you don't need to have a good understanding of formal control theory to do a fairly good job of it. Most control situations will work with just an hour or so max of tuning. Better yet, rarely will you need the integral term. That's right, just delete and ignore it! The only time you will need this term is when acceleration plays a big factor with your robot. If your robot is really heavy, or gravity is not on its side (such as steep hills), then you will need the integral term. But out of all the robots I have ever programmed, only two needed an integral term - and both robots were over 30 lbs with a requirement for extremely high precision (millimeter or less error band). Control without the integral term is commonly referred to as simply PD control. There are also times when you do not require a derivative term, but usually only when the device mechanically stabilizes itself, works at very low speeds so that overshoot just doesn't happen, or you simply don't require good precision.

Sampling Rate Issues
The sampling rate is the speed at which your control algorithm can update itself. The faster the sampling rate, the higher precision control your robot will have. Slower sampling rates will result in higher settling times and an increased chance of overshoot (bad). To increase sampling rate, you want an even faster update of sensor readings, and minimal delay in your program loop. It's good to have the robot react to a changing environment before it drives off the table, anyway. Humans suffer from the sampling rate issue too (apparently drinking reduces the sampling rate, who would have guessed?).

The rule of thumb is that the sample time should be between 1/10th and 1/100th of the desired system settling time. For a typical homemade robot you want a sampling rate of about 20+/second (very reasonable with today's microcontrollers).
PHOTORESISTOR ALGORITHMS

Photovore
Object Avoidance
Photovore, Improved
Split Brain
Photophobe

Photovore
The photovore is a robot that chases light, and is perhaps the simplest of all sensor algorithms. If you are a beginner, this should be your first algorithm. For this to work, your robot needs at least two light detecting sensors, typically photoresistors or IR emitter/detectors, out in front and spaced apart from each other. One on the left side of your robot, the other located on the right side. In your code, have the microcontroller read the analog value from both sensors. Then do a comparison - the sensor that reads more light is the direction your robot should turn. For example, if the left photoresistor reads more light than the right photoresistor, your robot should turn or tend towards the left. If both sensors read about the same value, meaning they both get the same amount of light, then your robot should drive straight.

pseudocode:
read left_photoresistor
read right_photoresistor

if left_photoresistor detects more light than right_photoresistor
    then turn robot left
if right_photoresistor detects more light than left_photoresistor
    then turn robot right
if right_photoresistor detects about the same as left_photoresistor
    then robot goes straight
loop

If you haven't discovered this yet, the photovore algorithm works great with the differential drive algorithm. =)
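Here is a minimal C sketch of the photovore. a2dConvert8bit() comes from the AVRlib a2d.c file used elsewhere in this document, while the ADC channel numbers, the deadband value, and the drive helpers turn_left(), turn_right(), and go_straight() are assumptions you would replace with your own pins and motor code.

#include "a2d.h"   //AVRlib analog-to-digital functions

#define DEADBAND 8   //"about the same" margin, found by testing

void turn_left(void);    //your motor code goes in these
void turn_right(void);
void go_straight(void);

void photovore(void)
{
    while(1)
    {
        unsigned char left_photo  = a2dConvert8bit(2);  //left photoresistor channel
        unsigned char right_photo = a2dConvert8bit(3);  //right photoresistor channel

        //this assumes a higher reading means more light;
        //flip the comparisons if your voltage divider is wired the other way
        if (left_photo > right_photo + DEADBAND)        //left side sees more light
            turn_left();
        else if (right_photo > left_photo + DEADBAND)   //right side sees more light
            turn_right();
        else                                            //about the same: drive straight
            go_straight();
    }
}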

Object Avoidance Photovore Mod
Using the same exact code, and same exact robot, you can do a small modification to your robot to give your photovore the ability to avoid objects. Bend your photoresistors so that they point downwards, and are close to the ground. Depending on the lighting, objects will all cast shadows onto the ground. Avoiding darkness, your photovore robot is naturally an object avoider. Of course if the lighting shines directly onto an object, or if you have dark floors with white walls, it might not work so well . . . But it's easy and it will work. For more advanced object avoidance algorithms, check out my tutorial for the IR Rangefinder. If you are making a line following robot, make a photovore, point the photoresistors towards the ground, and space the photoresistors so that the distance between them is less than the width of the white line. You can use the exact same algorithm.

Photovore Algorithm, Improved
This algorithm does the same as the original, but instead of being case-based it works under a more advanced Fuzzy Logic control algorithm. Your robot will no longer just have the three modes of turn left, turn right, and go forward. Instead it will have commands like 'turn left by 10 degrees' or 'turn right really fast', and with no additional lines of code! pseudocode:
read left_photoresistor
read right_photoresistor

left_motor = (left_photoresistor - right_photoresistor) * arbitrary_constant
right_motor = (right_photoresistor - left_photoresistor) * arbitrary_constant
loop

Photovore, Split Brain Approach
This algorithm works without comparison of photoresistor values. Instead, just command the right motor based on light from the left sensor, and the left motor with only data from the right sensor.

You can also get interesting variations by reversing the sensors for a cross-brain algorithm. pseudocode:
read left_photoresistor
read right_photoresistor

move left_wheel_speed = right_photoresistor * arbitrary_constant
move right_wheel_speed = left_photoresistor * arbitrary_constant
loop

The human brain, to an extent, takes the split brain approach. Both halves of the brain, naturally connected by a nerve called the corpus callosum, share information with each other. But if you sever this nerve (as does happen, although rarely, for medical reasons), the brain mostly remains fully functional. People can lead normal lives, despite their brain being split in half! (seriously, I'm not joking) This means the brain operates using the split brain approach, although on many occasions there are advantages to sharing information (hence the reason for the nerve).

Photophobe
The photophobe robot is a robot that runs away from light instead of chasing light. There are two ways you can do this. The first is simply to reverse the left and right photoresistors, so that the left sensor is on the right side, and the right sensor is on the left side. With no changes to the code, it will avoid light!

But if you want to do the code method, make this slight change: pseudocode:
read left_photoresistor
read right_photoresistor

if left_photoresistor detects more light than right_photoresistor
    then turn robot right
if right_photoresistor detects more light than left_photoresistor
    then turn robot left
if right_photoresistor detects about the same as left_photoresistor
    then robot goes straight
loop
PROGRAMMING - PRINTF()

Printing Out Data Printing out data from a microcontroller is extremely useful in robotics. You can use it for debugging your program, for use as a data logger, or to have your robot simply communicate with someone or something else. This short tutorial will give you sample code, and explain what everything means.

printf()
The most convenient way of writing to a computer through serial or to an LCD is to use the formatted print utility printf(). If you have programmed in C++ before, this is the equivalent of cout. Unlike most C functions, this function takes a variable number of parameters. The first two parameters are always the output channel and the formatting string; these may then be followed by variables or values of any type. The % character is used within the string to indicate that a variable value is to be formatted and output.

Variables That Can Follow The % Character
When you output a variable, you must also define what variable type is being output. This is very important, as, for example, a variable printed out as a signed long int will often not print out the same as the same variable printed out as an unsigned int.
c  - Character
u  - Unsigned int (decimal)
x  - Unsigned int (hex - lower case)
X  - Unsigned int (hex - upper case)
d  - Signed int (decimal)
lu - Unsigned long int (decimal)
lx - Unsigned long int (hex - lower case)
LX - Unsigned long int (hex - upper case)
ld - Signed long int (decimal)
e  - Float (scientific)
f  - Float (decimal)

End Of Line Control Characters Sometimes you would like to control the spacing and positioning of text that is printed out. To do this, you would add one of the commands below. I recommend just putting \n\r at the end of all printf() commands.
\n \r \b \' \" \\ \t \v go to new line carraige reset backspace single quote double quote backslash horizontal tab vertical tab

Examples of printf():

printf("hello, world!");

printf("squirels are cute\n\rpikachu is cuter\n\r"); printf("the answer is %u", answer); printf("the answer is %u and %f", answer, float(answer)); printf("3 + 4 = %u", (3+4)); printf("%f, %u\n\r", (10/3), answer); printf("the sensor value is %lu", analog(1));
PROGRAMMING - ROBOT SIMULATION

What a Robot Simulator is NOT
Everyone wants a cool 'robot simulator' where a 3D animated robot runs around a 3D course. But actually, that is not what a simulator is or what it is for. That is simply a 3D rendering or animation with no purpose other than for visual demonstration. So then if not for 3D visualization, what is it for? Simulation tests a theory! Simulation is used as a quicker and/or simpler method for testing out ideas, theories, and software with robots. If you don't know what you are testing, then you don't need a simulator. Following are a few examples.

Simulation is for Testing Ideas
Let's say you haven't built a robot yet, but want to test out an idea without spending tons of time and money building the robot to do it. With a simulator you can have an 'instant' robot to work with. This is great for those who prefer to program over working with hardware.

Simulation to Test Software
Let's say you wrote up this really neat pathfinder algorithm and you want to test the code for not just bugs, but also the effectiveness of the algorithm for a robot. Well, you can test it on an actual robot, but doing so is a long and tedious process. First you turn on the bot, program it, place it in the start position, let it run, then record what happened. You typically need to do that several times to find an average. Given the results, you go back and restart that whole process a few dozen times, tweaking it all the time. Each individual run can take minutes to do, not including issues with recharging batteries and hardware failures, resulting in hours or days of lost time. With a simulator, you can run dozens of tests within just minutes. A perfect example was when I was writing my wavefront robot pathfinder algorithm. From experience I knew it would be incredibly difficult and time consuming to work out all the bugs on the actual hardware. So instead I wrote up a quick simulator to debug the algorithm. It probably saved me a week of coding and testing! See the below image of the simulation. Now you may be asking, where are the pretty 3D animations of a robot running around? As I said before, simulations aren't for display, they exist for functional reasons. For my wavefront test, letters and numbers in a command prompt were much more efficient/effective than a graphic display. You can find more information and simulation source code in my wavefront tutorial.

Simulation to Develop Theories
Let's say you are designing a robot, but have no idea what features are important. For example, you are designing a four-wheeled robot with a mass of 5kg, what is the optimal spacing between the wheels? 5 inches? 15 inches? How would you develop an equation to determine this? You could just guess, but what if that value is actually very important for the performance of your robot? Or maybe instead of guessing you could build ~10 robots, each with a different spacing, to find the best one. But a better way would be simulation. For this particular example, what you would do is put the robot in a physics simulator. You run about 10 simulations where the spacing is different and the terrain is different, and then identify patterns to select the best spacing. It's basically a scientific experiment done in software. When I was designing my Carpet Monkey robot there was the question of tail length. Robot engineers have in the last ~10 years or so found that adding a tail to climbing robots greatly enhances their climbing ability, but no one quite knew quantitatively why. An even harder question is, how long should that tail be? Well, my Carpet Monkey definitely needed a tail to function, but I didn't want to build a dozen different tail lengths to see which was best. So I ran a physics simulation on it using Phun 2D: To my knowledge this is the first quantitative analysis of robot tails ever done. Tail lengths on the many other climbing robots were guessed based on subjective experiments. Feel free to download my Carpet Monkey Phun file. Here is a method to transfer a robot design to Phun. Also, a dual claw version Phun file. Thanks lucidliquid! Other simulations include computational fluid dynamics (CFD), stress analysis, analog circuit analysis, dynamics testing, control stability analysis, pathfinding, FEA, etc.

Why can't I have pretty 3D graphics?
There actually are a few circumstances where this is useful. If you wanted to demonstrate your robot concept to non-technical people that can potentially give you lots of money, then you'd want to *animate* your robot. 3D graphics are also very helpful as CAD tools, something I like to use when designing complicated parts and assemblies. Doing it all on paper is a pain . . .

Disadvantages of Simulation
There are many disadvantages to simulating a robot. This is because simulations only simulate what you tell them to simulate. (Yea, a no-brainer, but you'd be surprised . . .) If you forget something, say friction, or make an error, such as the wrong units for gravity, it will give a false result. Most simulations are very simplified: no sensor noise, no friction, no computational lag in the robot, all objects are perfect cubes, water is incompressible, etc. Just because your robot will work in the simulation doesn't mean it will work in real life! Of course you can spend lots of time making your simulation perfect, but if you require 100% realistic settings it's probably better to just test with a real robot. Before designing your simulation, you need to decide what you can simplify and what needs to remain realistic. Simulation sometimes takes a really long time. A single CFD test run for my robot fish takes between 1 and 2 days on an expensive high-speed blade server. Testing the actual real life robot fish takes like 5 minutes. So why use the CFD? Because I don't need a robot fish already built to test it! Simulations are often wrong. They are wrong because the experimenter makes a mistake, or isn't sure what features are most important and hence oversimplifies - common for new experimental theories. Often you need to complement the simulation with real life experiments for comparison to make sure the simulation is accurate. I once had a simulation giving a force result 100x larger than the real life experiment - turned out there was a math error in it. If a scientist reports some new result he found in a simulation, but offers no real life evidence/experiments, you can quickly call him out. He has no proof that his simulations aren't flawed. I had this flawed simulation problem when designing my single part suspension system wheels for my ERP. Of course in simulation it worked perfectly. But in reality I didn't expect bending creep to appear days or weeks after using the wheel. It took me several redesigns and *experimental testing* to fix the problem: As such, simulation should be used as a complementary tool, but isn't an end-all solution.

Where do I get simulation software?
Check out my list of robot simulation software. You might also find some in the below ads if Google is smart enough:
ROBOT PARTS LIST - SOFTWARE

PCB Design
Electronics Simulation
Robot Simulation
Computer Vision
Miscellaneous
AVR
ARM
PIC
CAD
FEA
CFD
CAM/CNC

Check the ads here for potential robotics software, too. For more info, to offer your thoughts on some particular software, or to add some software to the list, check out this robot software forum post.

PCB Design Software

gEDA - Freeware, full GPL'd suite of Electronic Design Automation tools. These tools are used for electrical circuit design, schematic capture, simulation, prototyping, and production. Currently, the gEDA project offers a mature suite of free software applications for electronics design, including schematic capture, attribute management, bill of materials (BOM) generation, netlisting into over 20 netlist formats, analog and digital simulation, and printed circuit board (PCB) layout.

Eagle CAD - Freeware for non-profit uses, the most popular PCB editing software, although it definitely has a steep learning curve.
Eagle3D - Freeware, an extension to Eagle CAD to view your circuits in 3D.
KiCAD - Freeware, for the creation of electronic schematic diagrams and printed circuit board artwork. Has 3D circuit viewing capabilities.
Altium - Used to be called Protel.
Easy-PC - It has a built in Spice simulator, too.
PADS - PCB design software.
NI Ultiboard - National Instruments PCB design software.
PCB Trace Width Calculator - Freeware Javascript useful to determine if your circuit traces are thick enough to not fry.

Electronics Simulation Software

Proteus - PCB design and simulation software, can also simulate MCUs.
OrCad - PCB design and its simulation software, PSPICE.
Falstad - Freeware Java applet circuit simulator (no software to install).
VMLAB - Freeware tool for AVR development. VMLAB can be integrated with WinAVR, and it also supports digital and analog simulation and debugging. Really, it is the "lightweight Proteus".
NI Multisim - National Instruments electronics simulation software.
TI SPICE - Freeware Texas Instruments SPICE-based analog circuit simulation.
TINA - TI's electronics simulation software, with MCU capabilities.
Qucs - Freeware, an integrated circuit simulator, which means you are able to set up a circuit with a graphical user interface (GUI) and simulate the large-signal, small-signal and noise behaviour of the circuit. After that simulation has finished you can view the simulation results on a presentation page or window.

Robot Simulation Software

AUV Workbench - Freeware, simulation software designed for expensive AUV platforms, but easy to use with nice graphics. Developed by the NPS Center for AUV Research, Naval Postgraduate School.
Player Stage - Freeware, robot simulation software, but not user friendly.
Microsoft Robotics Studio Visual Simulation Environment - Less than user friendly robot simulation software bought from AEGIS.
PHUN - 2D Physics Sandbox - Freeware, a 2D physics simulator that is *very* simple to learn and can even simulate water.
3D Graphic Robot Simulation - Graphic Java-based 3D robot arm simulator.
DMOZ Open Directory list - list of robot simulators.
Sinbad - Java 3D robot simulator for scientific and educational purposes. It is mainly dedicated to researchers/programmers who want a simple basis for studying Situated Artificial Intelligence, Machine Learning, and more generally AI algorithms, in the context of Autonomous Robotics and Autonomous Agents.
Webots - very expensive professional robot simulation software.

Computer Vision Software

RoboRealm - Its simple GUI interface allows you to do histograms, edge detection, filtering, blob detection, matching, feature tracking, thresholding, transforms and morphs, coloring, and a few others.
OpenCV - a popular open source collection of computer vision algorithms. An intro to OpenCV is also available.
Computer Vision Source Code - a collection of computer vision software links.

Miscellaneous

MatLab - has tons of packages for anything from simulation to computer vision.

SciLab - open source alternative to MatLab.
Electronics Assistant - basic resistor and frequency calculator.
MSC Adams - dynamics simulation.
Microsoft Movie Maker - free and already installed with every copy of Windows, this software is great for making videos of your robots. I used this to produce all my robot videos prior to 2009.
Sony Vegas Movie Studio 9 Platinum Pro Pack - This is what I used to produce all my robot videos starting in 2009. WMM above had a lot of limitations with sound and special effects.
other movie making software - a review of the top movie making software. Make sure it allows at least three audio streams (movie audio, added music, voice over, etc.).
ViewMaster - Gerber viewing and editing software. Look in the free downloads section for ViewMate, a freeware gerber viewer. Requires registration.
Microsoft Visual Studio - free edition, includes Visual Basic.

AVR Software

AVRStudio - the official IDE for programming AVR's
WinAVR - tons of open source AVR software all in one install package
Procyon AVRlib - open source software for AVRs
PonyProg - freeware serial programmer software for AVRs
ImageCraft ICC - AVR C compiler
vAVRdisasm - freeware AVR Disassembler
WebbotLib - the easiest and best documented open source library for AVR devices (recommended)
a really large list of more C for AVR links

ARM Software

Procyon ARMlib - open source C library for ARM processors
WinARM - collection of ARM based software

PIC Software

PICC
CCSC
MPLAB IDE

CAD Software

Google Sketchup - Freeware (with limitations) CAD software. For more info, see Google Sketchup Info. Also check out Airman00's robot CAD parts collection.
Autodesk Inventor - popular 3D CAD software, this is what I use to design my robots.
Solid Edge - 3D CAD software.
Solidworks - A popular 3D CAD software.
Rhino 3D - 3D CAD software.
Alibre Design Xpress - has limited freeware version.
Blender - Blender is the free open source 3D content creation suite, available for all major operating systems under the GNU General Public License.
BobCAD-CAM - Both 3D CAD and CAM software.

FEA Software

To learn more about FEA and CFD, please read my FEA and CFD tutorial.

ANSYS - All ANSYS FEA products.

Solidworks Simulation Professional - Both thermal and stress analysis tools.
COMSOL - Multiphysics software, with neat capabilities.
Open Source FEA Software List - a bunch of free FEA software, but with little to no documentation.
FreeFem++ - Freeware FEM software.

CFD Software

FEMAP - Industry software with thermal and flow solvers.
Fluent - Flow solver now owned by ANSYS.
ANSYS CFX - The ANSYS flow solver.
Solidworks Flow - Solidworks fluids solver.
OpenCFD - Freeware fluids solver.
CFdesign - Thermal fluids solver for use with AutoDesk.

There are a few CFD software suggestions on the forum: 188 and 2284.

CNC CAM Software

EdgeCAM
Mastercam
Mach3 and LazyCam
Dolphin CAD/CAM
FlashCut CNC
Vectric

LinuxCNC
JalaCNC - Freeware, plots CNC g-code in 2D.
KCam4 - limited shareware, plots g-code in 3D.
BobCAD-CAM - Both 3D CAD and CAM software.

PROGRAMMING - TIMERS

Timers for Microcontrollers
The timer function is one of the basic features of a microcontroller. Although some compilers provide simple macros that implement delay routines, in order to determine elapsed time and to maximize use of the timer, understanding the timer functionality is necessary. This example will be done using the PIC16F877 microcontroller in C. To introduce delays in an application, the CCSC macros delay_ms() and delay_us() can be used. These macros provide the ability to block the MCU until the specified delay has elapsed. But what if you instead want to determine elapsed time for, say, a PID controller, or a data logger? For tasks that require the ability to measure time, it is possible to write code that uses the microcontroller timers.

The Timer
Different microcontrollers have different numbers and types of timers (Timer0, Timer1, Timer2, watchdog timer, etc.). Check the data sheet for the microcontroller you are using for specific details.

These timers are essentially counters that increment based on the clock cycle and the timer prescaler. An application can monitor these counters to determine how much time has elapsed. On the PIC16F877, Timer0 and Timer2 are 8-bit counters whereas Timer1 is a 16-bit counter. Individual timer counters can be set to an arbitrary value using the CCSC macro set_timer0, set_timer1, or set_timer2. When the counter reaches its limit (255 for 8-bit and 65535 for 16-bit counters), it overflows and wraps around to 0. Interrupts can be generated when wrap around occurs, allowing you to count these resets or initiate a timed event. Timer1 is normally used for PWM or capture and compare functions. Each timer can be configured with a different source (internal or external) and a prescaler. The prescaler determines the timer granularity (resolution). A timer with a prescaler of 1 increments its counter every 4 clock cycles - 1,000,000 times a second if using a 4 MHz clock. A timer with a prescaler of 8 increments its counter every 32 clock cycles. It is recommended to use the highest prescaler possible with your application. Calculating Time Passed The equation to determine the time passed after counting the number of ticks would be:
delay (in ms) = (# ticks) * 4 * prescaler * 1000 / (clock frequency)

for example . . . Assume that Timer1 is set up with a prescaler of 8 on an MCU clocked at 20 MHz. Assume that a total of 6250 ticks were counted. then . . .
delay (in ms) = (# ticks) * 4 * 8 * 1000 / (20000000)

delay (in ms) = 6250 / 625 = 10 ms (at this clock and prescaler there are 625 ticks per ms)

Code in C
First you must initialize the timer:
long delay;

setup_timer_0(T0_INTERNAL | T0_DIV_BY_8); //Set Timer0 prescaler to 8

now put this code in your main loop:
set_timer0(0); //reset timer to zero where needed

printf("I eat bugs for breakfast."); //do something that takes time

//calculate elapsed time in ms, use it for something like PID
delay = get_timer0() / 625;

//or print out data and put a time stamp on it for data logging
printf("%u, %u, %lu\r\n", analog(PIN_A0), analog(PIN_A1), get_timer0());

Note that it is very important that you do not call the get_timer0() command until exactly when it is needed. In the above example I call the timer in my printf() statement - exactly when I need it.

Timer Overflow
You should also be careful that the timer never overflows in your loop or the timer will be wrong. If you expect it to overflow, you could use a timer overflow interrupt that counts the number of overflows - each overflow being a known amount of time depending on your prescaler. In CCSC, interrupt service routines are functions that are preceded with #int_xxx. For instance, a Timer1 interrupt service routine would be declared as follows:
#int_timer1
void timer1_interrupt()   //timer1 has overflowed
{
    //do something quickly here
    //maybe count the interrupt
    //or perform some task
    //good practice to not stay in the interrupt too long
}

To enable interrupts, the global interrupt bit must be set and then the specific interrupt bits must be set. For instance, to enable Timer0 interrupts, one would program the following lines right after the timer is initialized:
enable_interrupts(GLOBAL);
enable_interrupts(INT_TIMER0);

If you want to stop the application from processing interrupts, you can disable the interrupts using the disable_interrupts(INT_TIMER0) CCSC macro. You can either disable a specific interrupt or all interrupts using the GLOBAL define.

Timer Delay
Here is another code sample that shows how to create a delay of 50 ms before resuming execution (alternative to delay_ms). Note that 50 ms at 625 ticks per ms is 31250 ticks, which is too large for the 8-bit Timer0 to count, so the 16-bit Timer1 is used here:

setup_timer_1(T1_INTERNAL | T1_DIV_BY_8); //Set Timer1 prescaler to 8

set_timer1(0); //reset timer
while (get_timer1() < 31250); // wait for 50ms (625 ticks per ms * 50 ms)
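To make the overflow-counting idea from the Timer Overflow section concrete, here is a hedged CCSC sketch that keeps elapsed time across Timer1 wrap-arounds. It assumes the same 20 MHz clock and prescaler of 8 used above (625 ticks per millisecond); the function names are my own, not part of the compiler.

int32 overflow_count = 0;   //number of times Timer1 has wrapped around

#int_timer1
void timer1_overflow_isr()
{
    overflow_count++;        //each overflow = 65536 ticks ~= 104.9 ms at this clock/prescaler
}

void init_timing(void)
{
    setup_timer_1(T1_INTERNAL | T1_DIV_BY_8);   //Timer1 with a prescaler of 8
    set_timer1(0);
    overflow_count = 0;
    enable_interrupts(INT_TIMER1);
    enable_interrupts(GLOBAL);
}

//elapsed milliseconds since init_timing(), combining overflows and the current count
int32 elapsed_ms(void)
{
    return (overflow_count * 65536 + get_timer1()) / 625;
}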
PROGRAMMING - TRIGONOMETRY LOOKUP TABLE

Trigonometric Lookup Tables were used quite often way back when computers were only in labs and not in people's homes. Back then computers were slow, so slow that it would take a significant amount of time to solve, say, cos(40). And processors were unable to do it through hardware either. So if you needed to solve trigonometry functions in real time, you would need a different solution. Yes, you guessed it, a lookup table. Why would you need this? Because although computers can do billions of trig functions in seconds, that little tiny PIC you have on your omni-wheel robot cannot. So here is how to do it. Copy/paste my PIC C code below and change the syntax to whatever language you are using. Too easy, huh? =P

To call the function just say:

output = anglookuptable(the_angle_in_degrees, 1); // 1 for cosine, 0 for sine

The Trigonometric Lookup Table Code:

signed int angtable[10]={100,99,94,87,77,64,50,34,17,0}; //cos(0 to 90 degrees), multiplied by 100 so no floating point math

//trig lookup table, type 1 for cos, 0 for sin, degrees is from 0->360
signed int anglookuptable(long int degrees, int type);

signed int anglookuptable(long int degrees, int type)
{
    //{100,99,94,87,77,64,50,34,17,0}
    signed int c=1;
    signed int s=1;
    int i=(degrees/10); //includes 0 to 90 degrees

    if (i > 9 && i <= 18) //between 91 and 180
    {
        i=18-i;
        c=-1;
    }

    if (i > 18 && i <= 27) //between 181 and 270
    {
        i=i-18;
        c=-1;
        s=-1;
    }

    if (i > 27 && i <= 36) //between 271 and 360
    {
        i=36-i;
        s=-1;
    }

    if (type==1) //cosine
    {
        //printf("%d\r\n",c*angtable[i]);
        return c*angtable[i];
    }
    else //sine
    {
        //printf("%d\r\n",s*angtable[9-i]);
        return s*angtable[9-i];
    }
}

MICROCONTROLLER UART TUTORIAL

RS232
EIA232F
TTL and USB
Adaptor Examples
Tx and Rx
Baud Rate, Misc
Asynchronous Tx
Loop-Back Test
$50 Robot UART

What is the UART?
The UART, or Universal Asynchronous Receiver / Transmitter, is a feature of your microcontroller useful for communicating serial data (text, numbers, etc.) to your PC. The device changes incoming parallel information (within the microcontroller/PC) to serial data which can be sent on a communication line. Adding UART functionality is extremely useful for robotics. With the UART, you can add an LCD, bootloading, bluetooth wireless, make a datalogger, debug code, test sensors, and much more! Understanding the UART could be complicated, so I filtered out the useless information and present to you only the useful need-to-know details in an easy to understand way . . . The first half of this tutorial will explain what the UART is, while the second half will give you instructions on how to add UART functionality to your $50 robot.

What is RS232, EIA-232, TTL, serial, and USB?
These are the different standards/protocols used for transmitting data. They are incompatible with each other, but if you understand what each is, then you can easily convert them to what you need for your robot.

RS232
RS232 is the old standard and is starting to become obsolete. Few if any laptops even have RS232 ports (serial ports) today, with USB becoming the new universal standard for attaching hardware. But since the world has not yet fully swapped over, you may encounter a need to understand this standard. Back in the day circuits were noisy, lacking filters and robust algorithms, etc. Wiring was also poor, meaning signals became weaker as wiring became longer (relates to resistance of the wire). So to compensate for the signal loss, they used very high voltages. Since a serial signal is basically a square wave, where the wavelengths relate to the bit data transmitted, RS232 was standardized as +/-12V. To get both +12V and -12V, the most common method is to use the MAX232 IC (or ICL232 or ST232 - different IC's that all do the same thing), accompanied with a few capacitors and a DB9 connector. But personally, I feel wiring these up is just a pain . . . here is a schematic if you want to do it yourself (instead of a kit):

EIA232F
Today signal transmission systems are much more robust, meaning a +/-12V signal is unnecessary. The EIA232F standard (introduced in 1997) is basically the same as the RS232 standard, but now it can accept a much more reasonable 0V to 5V signal. Almost all current computers (after 2002) utilize a serial port based on this EIA-232 standard. This is great, because now you no longer need the annoying MAX232 circuit! Instead what you can use is something called the RS232 shifter - a circuit that takes signals from the computer/microcontroller (TTL) and correctly inverts and amplifies the serial signals to the EIA232F standard. If you'd like to learn more about these standards, check out this RS232 and EIA232 tutorial (external site). The cheapest RS232 shifter I've found is the $7 RS232 Shifter Board Kit from SparkFun. They have schematics of their board posted if you'd rather make your own. This is the RS232 shifter kit in the baggy it came in . . .

And this is the assembled image. Notice that I added some useful wire connectors that did not come with the kit so that I may easily connect it to the headers on my microcontroller board. Also notice how two wires are connected to power/ground, and the other two are for Tx and Rx (I'll explain this later in the tutorial).

TTL and USB The UART takes bytes of data and transmits the individual bits in a sequential fashion. At the destination, a second UART re-assembles the bits into complete bytes.

You really do not need to understand what TTL is, other than that TTL is the signal transmitted and received by your microcontroller UART. This TTL signal is different from what your PC serial/USB port understands, so you would need to convert the signal. You also do not really need to understand USB, other than that it's fast becoming the only method to communicate with your PC using external hardware. To use USB with your robot, you will need an adaptor that converts to USB. You can easily find converters under $20, or you can make your own by using either the FT232RL or CP2102 ICs.

Signal Adaptor Examples
Without going into the details, and without you needing to understand them, all you really need to do is just buy an adaptor. For example:

TTL -> TTL to RS232 adaptor -> PC
TTL -> TTL to EIA-232 adaptor -> PC
TTL -> TTL to EIA-232 adaptor -> EIA-232 to USB adaptor -> PC
TTL -> TTL to USB adaptor -> PC
TTL -> TTL to wireless adaptor -> wireless to USB adaptor -> PC

If you wanted bluetooth wireless, get a TTL to bluetooth adaptor, or if you want ethernet, get a TTL to ethernet adaptor, etc. There are many combinations, just choose one based on what adaptors/requirements you have. For example, if your laptop only has USB, buy a TTL to USB adaptor as shown with my SparkFun Breakout Board for CP2103 USB:

There are other cheaper ones you can buy today, you just need to look around. On the left of this below image is my $15 USB to RS232 adaptor, and the right cable is my RS232 extension cable for those robots that like to run around:

Below is my USB to wireless adaptor that I made in 2007 (although now companies sell them wired up for you). It converts a USB type signal to a TTL type signal, and then my

Easy Radio wireless transmitter converts it again to a method easily transmitted by air to my robot:

And a close-up of the outputs. I soldered on a male header row and connected the ground, Tx, and Rx to my wireless transmitter. I will talk about Tx and Rx soon:

Even my bluetooth transceiver has the same Tx/Rx/Power/Ground wiring:

If you have a CMUcam or GPS, again, the same connections. Other Terminology . . .

Tx and Rx As you probably guessed, Tx represents transmit and Rx represents receive. The transmit pin always transmits data, and the receive pin always receives it. Sounds easy, but it can be a bit confusing . . . For example, suppose you have a GPS device that transmits a TTL signal and you want to connect this GPS to your microcontroller UART. This is how you would do it:

Notice how Tx is connected to Rx, and Rx is connected to Tx. If you connect Tx to Tx, stuff will fry and kittens will cry. If you are the type of person to accidentally plug in your wiring backwards, you may want to add a resistor of say ~2kohm coming out of your UART to each pin. This way if you connect Tx to Tx accidentally, the resistor will absorb all the bad ju-ju (current that will otherwise fry your UART).

Tx pin -> connector wire -> resistor -> Rx pin And remember to make your ground connection common!

Baud Rate
Baud is a measurement of transmission speed in asynchronous communication. The computer, any adaptors, and the UART must all agree on a single speed of information 'bits per second'. For example, your robot would pass sensor data to your laptop at 38400 bits per second and your laptop would listen for this stream of 1s and 0s expecting a new bit every 1/38400bps = 26us (0.000026 seconds). As long as the robot outputs bits at the predetermined speed, your laptop can understand it. Remember to always configure all your devices to the same baud rate for communication to work!

Data bits, Parity, Stop Bits, Flow Control
The short answer: don't worry about it. These are basically variations of the signal, each with long explanations of why you would/wouldn't use them. Stick with the defaults, and make sure you follow the suggested settings of your adaptor. Usually you will use 8 data bits, no parity, 1 stop bit, and no flow control - but not always. Note that if you are using a PIC microcontroller you would have to declare these settings in your code (google for sample code, etc). I will talk a little more about this in coming sections, but mostly just don't worry about it.

Bit Banging
What if by rare chance your microcontroller does not have a UART (check the datasheet), or you need a second UART but your microcontroller only has one? There is still another method, called bit banging. To sum it up, you send your signal directly to a digital input/output port and manually toggle the port to create the TTL signal. This method is fairly slow and painful, but it works . . .

Asynchronous Serial Transmission As you should already know, baud rate defines bits sent per second. But baud only has meaning if the two communicating devices have a synchronized clock. For example, what if your microcontroller crystal has a slight deviation of .1 second, meaning it thinks 1 second is actually 1.1 seconds long. This could cause your baud rates to break! One solution would be to have both devices share the same clock source, but that just adds extra wires . . . All of this is handled automatically by the UART, but if you would like to understand more, continue reading . . .

Asynchronous transmission allows data to be transmitted without the sender having to send a clock signal to the receiver. Instead, the sender and receiver must agree on timing parameters in advance and special bits are added to each word which are used to synchronize the sending and receiving units. When a word is given to the UART for Asynchronous transmissions, a bit called the "Start Bit" is added to the beginning of each word that is to be transmitted. The Start Bit is used to alert the receiver that a word of data is about to be sent, and to force the clock in the receiver into synchronization with the clock in the transmitter. These two clocks must be accurate enough to not have the frequency drift by more than 10% during the transmission of the remaining bits in the word. (This requirement was set in the days of mechanical teleprinters and is easily met by modern electronic equipment.)

When data is being transmitted, the sender does not know when the receiver has 'looked' at the value of the bit - the sender only knows when the clock says to begin transmitting the next bit of the word. When the entire data word has been sent, the transmitter may add a Parity Bit that the transmitter generates. The Parity Bit may be used by the receiver to perform simple error checking. Then at least one Stop Bit is sent by the transmitter. When the receiver has received all of the bits in the data word, it may check for the Parity Bits (both sender and receiver must agree on whether a Parity Bit is to be used), and then the receiver looks for a Stop Bit. If the Stop Bit does not appear when it is supposed to, the UART considers the entire word to be garbled and will report a Framing Error to the host processor when the data word is read. The usual cause of a Framing Error is that the sender and receiver clocks were not running at the same speed, or that the signal was interrupted.

Regardless of whether the data was received correctly or not, the UART automatically discards the Start, Parity and Stop bits. If the sender and receiver are configured identically, these bits are not passed to the host. If another word is ready for transmission, the Start Bit for the new word can be sent as soon as the Stop Bit for the previous word has been sent. In short, asynchronous data is 'self synchronizing'.

The Loop-Back Test The loop-back test is a simple way to verify that your UART is working, as well as to locate the failure point of your UART communication setup. For example, suppose you are transmitting a signal from your microcontroller UART through a TTL to USB converter to your laptop and it isn't working. All it takes is one failure point for the entire system to not work, but how do you find it? The trick is to connect the Rx to the Tx, hence the loop-back test. For example, to verify that the UART is outputting correctly:
o connect the Rx and Tx of the UART together
o printf the letter 'A'
o have an if statement turn on an LED if 'A' is received

If the LED doesn't turn on, you know that your code is a failure point (if not the only one). Then do this again on the PC side using HyperTerminal, directly connecting the Tx and Rx of your USB port. And then yet again through the TTL to USB adaptor. You get the idea . . . I'm willing to bet that if you have a problem getting it to work, it is because your baud rates aren't the same/synchronized. You may also find it useful to connect your Tx line to an oscilloscope to verify your transmit timing:
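For the microcontroller side, a minimal version of that test could look like the sketch below. It assumes the AVRlib setup used later in this tutorial (uartInit, uartSetBaudRate, uartReceiveByte, rprintf) plus the PORT_ON macro and delay_cycles from SoR_Utils.h; the LED pin is just an example, so adjust for your board:

uartInit();                       // initialize the UART hardware
uartSetBaudRate(38400);           // same baud on both 'ends' (they are jumpered together)
rprintfInit(uartSendByte);

rprintf("A");                     // send the letter 'A' out the Tx pin

delay_cycles(4000);               // give the byte time to loop back around

u08 received = 0;
if (uartReceiveByte(&received) && received == 'A')
	PORT_ON(PORTD, 4);            // LED on: the UART transmitted and received correctly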

Top waveform: UART transmitted 0x0F
Bottom waveform: UART received 0x0F

Adding UART Functions to AVR and your $50 Robot

To add UART functionality to your $50 Robot (or any AVR-based microcontroller) you need to make a few minor modifications to your code and add a small amount of extra hardware.

Full and Half Duplex

Full Duplex is defined by the ability of a UART to simultaneously send and receive data. Half Duplex is when a device must pause either transmitting or receiving to perform the other - a Half Duplex UART cannot send and receive data simultaneously. While most microcontroller UARTs are Full Duplex, most wireless transceivers are Half Duplex. This is because it is difficult to send two different signals at the same time on the same frequency without data collision. If your robot is wirelessly transmitting data through a Half Duplex transceiver, it will not be able to receive commands during that transmission.

Please check out the step-by-step instructions on how to add UART functionality to your $50 Robot >>>.
$50 ROBOT UART TUTORIAL

Adding UART Functions to AVR and your $50 Robot

To add UART functionality to your $50 Robot (or any AVR-based microcontroller) you need to make a few minor modifications to your code and add a small amount of extra hardware. Now of course I could just give you the code to use right away and skip this tutorial, or I can explain how and why these changes are made so you can 'catch your own fish' without me giving it to you in the future . . .

Now about the speed increase . . . We will be using the maximum frequency that your microcontroller can handle without adding an external crystal. How do you know what that frequency is? From the datasheet of your ATmega8/ATmega168, we can find: "By default, the Internal RC Oscillator provides an approximate 8.0 MHz clock.", listed in the 'System Clock and Clock Options -> Calibrated Internal RC Oscillator' section. Since we do not have an external crystal, we will configure the entire system (all individual components and code) for 8MHz. If you want a different frequency, I will also show you how to change it to your frequency of choice.

Open up your makefile, and add in rprintf.c and uart.c if it isn't already there:

# List C source files here. (C dependencies are automatically generated.)
SRC = $(TARGET).c a2d.c buffer.c rprintf.c uart.c

These are AVRlib files needed to do the hard UART and printf work for you. If you are using the $50 Robot source code, then you already have AVRlib installed and ready to use (so don't worry about it). Otherwise, read the instructions on installing AVRlib.

Also, look for this line towards the top:

F_CPU = 3686400

and replace it with your desired frequency. In this example we will use:

F_CPU = 8000000

Open up SoR_Utils.h and add in two AVRlib files, uart.h and rprintf.h:
//AVRlib includes
#include "global.h"		// global settings
#include "buffer.h"		// buffer function library
#include "uart.h"		// uart function library
#include "rprintf.h"	// printf library
//#include "timerx8.h"	// timer library (timing, PWM, etc)
#include "a2d.h"		// A/D converter function library

Now your compiler knows to use these AVRlib files. If you aren't using SoR_Utils.h, just add these lines at the top of your code where you declare your includes. I recommend not using the timer library because its default interrupt settings will cause your servos and UART to spaz out . . .

Open up global.h and set the CPU to 8Mhz:


#define F_CPU 8000000 // 8MHz processor

Now power up your microcontroller and connect it to your AVR programmer just as if you were going to program it:

Now BE VERY CAREFUL IN THIS STEP. If you program the wrong fuse, there is a possibility you could permanently disable your microcontroller from being further programmed. BAD JU-JU!!! Click the Fuses tab (see below image), and uncheck 'Divide clock by 8'. Having this setting checked makes your clock 8 times slower. A slower clock makes your microcontroller more power efficient, and unless your algorithm requires a lot of math, a fast processor isn't needed. But in this case we want a fast UART speed, so we need the faster clock. Also check 'Int. RC Osc. 8MHz'. By default this should already be checked, but I'm noting it just in case. If you were using a crystal or a different frequency, just scroll down in the Fuses tab for other options, as shown here:

Then push Program and you should get a message that looks something like this:
Entering programming mode.. OK! Writing fuses .. 0xF9, 0xDF, 0xE2 .. OK!

Reading fuses .. 0xF9, 0xDF, 0xE2 .. OK! Fuse bits verification.. OK Leaving programming mode.. OK!

Please note that the $50 Robot was designed for the lower clock speed. What this means is that all your functions that involve time will now run 8 times faster. In terms of processing this is great! But all your delay and servo values must be multiplied by 8 for them to work again.

For example,

delay_cycles(500);
servo_left(45);

must now be

delay_cycles(4000);//500*8
servo_left(360);//45*8

Or if you are really, really lazy and don't care about timing error, go into SoR_Utils.h and change this:
void delay_cycles(unsigned long int cycles)
{
	while(cycles > 0)
		cycles--;
}

to this:
void delay_cycles(unsigned long int cycles)
{
	cycles=cycles*8;//makes delays take 8x longer
	while(cycles > 0)
		cycles--;
}

Now we need to select a baud rate, meaning the speed at which data can be transferred. Typically you'd want to have a baud of around 115.2k. This is a very common speed, and is very fast for what most people need.

But can your microcontroller handle this speed? To find out, check the datasheet. For my ATmega168, I went to the 'USART0 -> Examples of Baud Rate Setting' section and found a chart that looks something like this:

I immediately found the column marked 8.0000 MHz (the internal clock of your microcontroller), which I circled in green for you. Then I went to the row marked 115.2k, marked in blue. Now what this means is that your UART can do this baud rate, but notice that it says the error is 8.5%. This means there is a good chance of syncing problems with your microcontroller. The error arises from the fact that Fosc is usually not a multiple of the standard UART frequencies, so dividing Fosc down rarely lands exactly on a standard baud rate. Rather than bothering with this possible problem, I decided to go down a few rates to 57.6k (circled in red). 3.5% could still be a bit high, so if you have problems with it, go down again to 38.4k with a .2% error (mostly negligible). So what error rate is considered optimal or best? It depends entirely on your hardware, so I don't have a single answer for you. If you want to learn more, feel free to read about how asynchronous serial transmission is 'self synchronizing'. The other option is to set the U2X register bit to 1 (the default is 0), which doubles the UART speed; the error at 115.2k then becomes only -3.5%. This can sometimes make it possible to achieve baud rates closer to the standard values. If you take a look at the formulas for calculating baud rate from the value of the UBRR (baud rate) register, the error rates should make sense:

U2X = 0 => baud rate = Fosc / 16 / (UBRR + 1)
U2X = 1 => baud rate = Fosc / 8 / (UBRR + 1)
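If you'd rather compute this than read it off the chart, here is a small stand-alone C program of my own (run it on your PC, not the robot) that reproduces the datasheet numbers - with Fosc = 8MHz and 115200 baud it prints UBRR = 3 and +8.5% error for U2X = 0, and UBRR = 8 and -3.5% for U2X = 1:

#include <stdio.h>

int main(void)
{
	double fosc = 8000000.0;     // internal RC oscillator
	double baud = 115200.0;      // desired baud rate
	int u2x;

	for (u2x = 0; u2x <= 1; u2x++)
	{
		int divisor = u2x ? 8 : 16;
		int ubrr = (int)(fosc / (divisor * baud) - 1.0 + 0.5);   // round to nearest whole UBRR
		double actual = fosc / (divisor * (ubrr + 1));           // baud rate you actually get
		double error = (actual - baud) / baud * 100.0;
		printf("U2X=%d: UBRR=%d, actual baud=%.0f, error=%+.1f%%\n", u2x, ubrr, actual, error);
	}
	return 0;
}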

Be aware that even if your microcontroller UART can operate at your desired baud, the adaptors you use might not. Check the datasheets before deciding on a baud rate!!! It turns out my RS232 Shifter is rated for 38400 in its datasheet, so that's the baud I ended up using for this tutorial.

Now after deciding on baud, in the file SoR_Utils.h (or in your main if you want) add the following code to your initialization (using your chosen BAUD rate):
uartInit();					// initialize UART
uartSetBaudRate(38400);		// set UART baud rate
rprintfInit(uartSendByte);	// initialize rprintf system

Relax, the hard part is done!

Now we need to add a line in our main() code that uses the UART. Add this somewhere in your main() code:
//read analog to display a sensor value
rangefinder = a2dConvert8bit(5);

//output message to serial (use HyperTerminal)
rprintf("Hello, World! My Analog: %d\r\n", rangefinder);

You don't actually need to output a sensor value, but I figured I'd show you how now so you can try it out with your robot's sensors.

The last programming step.

yaaaaayyyy! =) Save and then compile your code like you normally would:

Software is done!

Now for the hardware.

I will assume you already read about adaptors for the UART and know that you have just four wires to connect to your robot:
5V
Ground
Tx
Rx

Now plug the power connections (regulated 5V and ground) into an unused header on your circuit board just like you would a sensor (the sensors use regulated 5V). Then plug the Tx of your adaptor into the Rx header of your microcontroller, and then the Rx of your adaptor into the Tx header of your microcontroller. Don't know which pin is which? Check the datasheet for the pinout and look for RXD and TXD. Or look at the $50 Robot schematic.

It just so happens that you probably have servos on pins 2 and 3 - oh no! Don't worry, just move your servos onto a different pin, such as 4 (PD2), 5 (PD3), and/or 6 (PD4). To move the servos to a different port in code, find this in SoR_Utils.h:
void servo_left(signed long int speed)
{
	PORT_ON(PORTD, 0);
	delay_cycles(speed);
	PORT_OFF(PORTD, 0);//keep off
}

and change '0' to the new port. If you wanted port PD2, then use '2':
PORT_ON(PORTD, 2);
delay_cycles(speed);
PORT_OFF(PORTD, 2);

Don't forget to make that change for your other servos, too. (and of course, save and compile again)

This is what my hardware plugged in looks like (click to enlarge):

Notice how I labeled my wires and pins so I didn't get them confused (with the result of maybe frying something). I used wire connectors to connect it all. At the top right is my RS232 Shifter and at the bottom right is my RS232 to USB adaptor.

Let's do a quick test. Chances are your adaptor will have a transmit/receive LED, meaning the LED turns on when you are transmitting or receiving.

Turn on your robot and run it. Does the LED turn on? If not, you might be doing something wrong . . . My adaptor has two LEDs, one for each line, and so my Rx LED flashes when the robot transmits.

Now you need to set up your computer to receive the serial data and to verify that it's working.

If you are using an adaptor, make sure it is also configured for the baud rate you plan to use. Typically the instructions for it should tell you how, but not always. So in this step I'll show you one method to do this. Click: Start->Settings->Control Panel->System A new window will come up called 'System Properties'. Open the Hardware tab and click device manager. You should see this:

Go to Ports, select the one you are using, and right click it. Select Properties. A new window should come up, and select the Port Settings tab:

Now configure the settings as you like. I'd recommend using the settings I did, but with your desired baud rate.

This is the last step!

To view the output data, use the HyperTerminal tutorial to set it up for your desired baud rate and com port. Make sure you close the AVR programming window so that you free up the com port for HyperTerminal! Two programs cannot use the same com port at the same time (a common mistake I always make). Now if you did everything right, you should start seeing text show up in HyperTerminal:

If it isn't working, consider doing a loop-back test for your UART debugging needs. You're finished! Good job!
PROGRAMMING - VARIABLES

C Variables

Controlling variables in your program is very important. Unlike when programming computers (such as in C++ or Java), where you can use floats and long ints left and right, doing so on a microcontroller can cause serious problems. With microcontrollers you need to always be careful about limited memory, limited processing speed, overflows, signs, and rounding.

C Variable Reference Chart
definition          size    number span allowed
bit                 1-bit   0, 1 (False, True)
char                8-bit   a-z, A-Z, 0-9
int, short int      8-bit   0 .. 255
unsigned int        8-bit   0 .. 255
signed int          8-bit   -128 .. 127
long int            16-bit  0 .. 65535
unsigned long int   16-bit  0 .. 65535
signed long int     16-bit  -32768 .. 32767
float               32-bit  1.2 x 10^(-38) .. 3.4 x 10^(38)

Limited Memory

Obviously the little microcontroller that's the size of a quarter on your robot isn't going to have the practically infinite memory of your PC. Although most microcontrollers today can be programmed without too much worry about memory limits, there are specific instances where it becomes important. If your robot does mapping, for example, efficient use of memory is important. You always need to remember to use the variable type that requires the least amount of memory yet still stores the information you need. For example, if a variable is only expected to store a number from 100 to 200, why use a long int when just an int would work? Also, the fewer bits that need to be processed, the faster the processing can occur.

Limited Processing Speeds

Your microcontroller is not a 2.5 GHz processor. Don't treat it like one. Chances are it's a 4 MHz to 20 MHz processor. This means that if you write a mathematically complex equation, your microcontroller could take up to seconds to process it. By that time your robot might have collided into a cute squirrel without even knowing!!! With robots you generally want to process your sensor data about 3 to 8 times per second, depending on the speed and environment of your robot. This means you should avoid all use of 16-bit and 32-bit variables at all costs. You should also avoid all use of exponents and trigonometry - both are implemented in software and require heavy processing. What if your robot requires a complex equation and there is no way around it? Well, what you would do is take shortcuts. Use lookup tables for often-made calculations, such as trigonometry. To avoid floats, instead of 13/1.8 use 130/18 by multiplying both numbers by 10 before dividing (see the example at the end of this section). Or round your decimal places - speed is almost always more important than accuracy with robots. Be very careful with the order of operations in your equation, as certain orders retain higher accuracy than others. Don't even think about derivatives or integrals.

Overflows

An overflow is when the value of a variable exceeds the allowed number span. For example, an int for a microcontroller cannot exceed 255. If it does, it will loop back:
unsigned int variable = 255;
variable = variable + 1;
//variable will now equal 0, not 256!!!

To avoid this overflow, you would have to change your variable type to something else, such as a long int. You might also be interested in reading about timers, as accounting for timer overflows is often important when using them.

Signs

Remember that signed variables can be either negative or positive, but unsigned variables can only be positive. In reality you do not always need a negative number. A positive number can often suffice because you can always arbitrarily define the semantics of a variable. For example, numbers between 0 and 128 can represent negatives, and numbers between 129 and 255 can represent positive numbers. But there will often be times when you would prefer to use a negative number for intuitive reasons. For example, when I program a robot, I use negative numbers to represent a motor going in reverse, and positive for a motor going forward. The main reason I would avoid negative numbers is simply that a signed int overflows past 127 (wrapping to -128) while an unsigned int overflows past 255 (wrapping to 0).

Extras

For further reading on programming variables for robots, have a look at the fuzzy logic tutorial.

Examples of Variables in C Code

Defining variables:
#define ANGLE_MAX 255 //global constants must be defined first in C

int answer;
signed long int answer2;
int variable = 3;
signed long int constant = -538;

variable math examples (assume answer is reset after each example):


answer = variable * 2; //answer = 6

answer = variable / 2; //answer = 1 (because of rounding down)
answer = variable + constant; //answer = 233 (because of overflow and signs)
answer2 = (signed long int)variable + constant; //answer2 = -535
answer = variable - 4; //answer = 255 (because of overflow)
answer = (variable + 1.2)/3; //answer = 1 (because of rounding)
answer = variable/3 + 1.2/3; //answer = 1 (because of rounding and order of operations)
answer = answer + variable; //answer = RANDOM GARBAGE (because answer was never initialized)
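Tying back to the 'shortcuts' suggested under Limited Processing Speeds, here is a small example of my own showing both tricks - scaling to avoid floats, and a precomputed lookup table instead of calling sin(). The table values, the scale factor of 100, and the 10 degree resolution are my own illustrative choices; drop something like this into your code rather than pulling in floating point math:

//instead of reading/1.8 (a float divide), scale both sides by 10 and stay in integers
//(long int so the intermediate *10 doesn't overflow an 8-bit int)
unsigned long int reading = 130;                // example sensor value
unsigned long int scaled = (reading * 10) / 18; // same result as 130/1.8, rounded down

//sine lookup table: 100*sin(angle) for angle = 0,10,20,...,90 degrees,
//precomputed once on your PC so the robot never calls sin()
const unsigned char sin100[10] = {0, 17, 34, 50, 64, 77, 87, 94, 98, 100};

unsigned int angle = 30;                        // degrees
unsigned int s = sin100[angle / 10];            // s = 50, meaning sin(30 deg) = 0.50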
WAVEFRONT ALGORITHM

Robot Mapping and Navigation

The theory behind robot maze navigation is immense - so much that it would take several books just to cover the basics! So to keep it simple, this tutorial will teach you one of the most basic but still powerful methods of intelligent robot navigation. For reasons I will explain later, this robot navigation method is called the wavefront algorithm. There are four main steps to running this algorithm.

Step 1: Create a Discretized Map

Create an X-Y grid matrix to mark empty space, robot/goal locations, and obstacles. For example, this is a pic of my kitchen. Normally there isn't a cereal box on the floor like that, so I put it there as an example of an obstacle:

Using data from the robot sensor scan, I then lay a basic grid over it:

This is what it looks like with all the clutter removed. I then declare the borders (red) impassable, as well as enclose any areas with objects as impassable too (also blocked off in red). Objects, whether big or small, will be treated as filling an entire grid unit. You may either hardcode the borders and objects into your code, or your robot can add the objects and borders as it detects them with a sensor. What you get is an X-Y grid matrix, with R representing where the robot is located:

But of course this is not what it really looks like in robot memory. Instead, it looks much more like this matrix below. All I did was flatten out the map, and stored it as a matrix in my code. Use 0 to represent impassable and 1 to represent the robot (marked as R on the image).

Note: In my source code I used the below values. My examples here are just simplifications so that you can more easily understand the wavefront algorithm.
// WaveFront Variables
int nothing=0;
int wall=255;
int goal=1;
int robot=254;

An example of a map matrix in C looks something like this:


//X is horizontal, Y is vertical
int map[6][6]=
	{{0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0}};

Step 2: Add in Goal and Robot Locations

Next your robot must choose its goal location, G (usually preprogrammed for whatever reason). The goal could be your refrigerator, your room, etc. To simplify things, although it is not optimal, we are assuming this robot can only rotate in 90 degree increments. In my source code I call this function:

new_state=propagate_wavefront(robot_x,robot_y,goal_x,goal_y);

robot_x and robot_y mark the robot's coordinates, and goal_x and goal_y are of course the goal location.

Step 3: Fill in Wavefront

This is where it gets a bit hard, so bear with me. In a nutshell the algorithm checks node by node, starting at the top left, looking at the nodes bordering it. Ignore walls, look at the nodes around your target node, then count up. For example, if a bordering node has the number 5, and it's the lowest bordering node, make the target node a 6. Keep scanning the matrix until the robot node borders a number. After this pseudocode I'll show you graphic examples.

Pseudocode:
check node A at [0][0]
now look north, south, east, and west of this node (boundary nodes)

if (boundary node is a wall)
	ignore this node, go to next node B
else if (boundary node is robot location && has a number in it)
	path found!
	find the boundary node with the smallest number
	return that direction to robot controller
	robot moves to that new node
else if (boundary node has a goal)
	mark node A with the number 3
else if (boundary node is marked with a number)
	find the boundary node with the smallest number
	mark node A with (smallest number + 1)

if (no path found)
	go to next node B at [0][1]
	(sort through entire matrix in order)

if (no path still found after full scan)
	go to node A at [0][0]
	(start over, but do not clear map)
	(sort through entire matrix in order)
	repeat until path found

if (no path still found && matrix is full)
	this means there is no solution
	clear entire matrix of obstacles and start over
	this accounts for moving objects! adaptivity!
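If C is easier for you to read than pseudocode, here is a rough single-sweep sketch of my own, using the nothing/wall/goal/robot values and the 6x6 map from earlier. It is a simplified stand-in, not the propagate_wavefront() function from the downloadable source, and it numbers cells starting at 2 next to the goal rather than 3 as in the pictures below:

//returns the smallest numbered (wave) value among the 4 neighbors of (x,y), or 0 if none yet
int smallest_neighbor(int map[6][6], int x, int y)
{
	int offsets[4][2] = {{0,-1},{0,1},{-1,0},{1,0}};
	int best = 1000;
	int i;
	for (i = 0; i < 4; i++)
	{
		int nx = x + offsets[i][0];
		int ny = y + offsets[i][1];
		if (nx < 0 || nx > 5 || ny < 0 || ny > 5)
			continue;                              // off the map
		int v = map[ny][nx];
		if (v == wall || v == robot || v == nothing)
			continue;                              // only the goal and numbered cells count
		if (v < best)
			best = v;
	}
	return (best == 1000) ? 0 : best;
}

//one sweep over the whole matrix; call repeatedly until it returns 0 (no change, no path yet)
//or -1 (the robot's cell borders a numbered cell, so a path exists)
int wavefront_sweep(int map[6][6])
{
	int x, y, changed = 0;
	for (y = 0; y < 6; y++)
	{
		for (x = 0; x < 6; x++)
		{
			if (map[y][x] != nothing && map[y][x] != robot)
				continue;                          // skip walls, the goal, and already-numbered cells
			int n = smallest_neighbor(map, x, y);
			if (n == 0)
				continue;                          // no numbered neighbor yet, try again next sweep
			if (map[y][x] == robot)
				return -1;                         // robot borders the wave: path found
			map[y][x] = n + 1;                     // cells next to the goal (value 1) become 2, and so on
			changed++;
		}
	}
	return changed;
}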

Here is a graphic example. The goal and robot locations are already marked on the map. Now going through the matrix one node at a time, I've already scanned through the first 2 columns (X). On column 3, I scanned about halfway down until I reached the 5th node. Checking bordering nodes it is next to the Goal. So I mark this node with a 3 as shown.

Continuing on the 3rd column, I keep going down node by node. I check the bordering nodes and mark the target node with the smallest bordering number plus one. As you can see, the rest of the column gets filled in. Notice the 'wave' action yet? This is why it's called a wavefront. It has also been called the Brushfire algorithm because it spreads like a brushfire . . .

Now go to the 4th column and start checking each node. When you get to the 4th row, your target node borders the goal. Mark it with a 3. Then keep scanning down. Ignore the goal, and ignore walls. On the 9th row, you will notice the target node borders the number 7 on the left. It's the lowest value bordering node, so 7 + 1 = 8. Mark this target node as 8.

Then going to the 10th row you notice the target node is the robot location. If the robot location borders a filled in number (in this case, the number 8) then the algorithm is finished. A full path has been found!

Step 4: Direct Robot to Count Down

Now that a solution exists, tell your robot to drive to the square with the current number minus one. In this case, the current number was 9, so the robot must drive to a square labeled 8. There are multiple squares labeled 8, so the robot can go to either one. In this case, the 8 square on the left is more optimal because it results in fewer rotations of the robot, but for simplicity it really doesn't matter. Then have your robot go to a box labeled 7, then 6, then 5, and so on. Your robot will drive straight to the goal as so.

Adaptive Mapping

For adaptive mapping, your robot does not always know where all obstacles are located. In this case, it will find a 'solution' that might not actually work. Perhaps it didn't see all obstacles, or perhaps something in the environment moved? So what you do is:

1) have your robot scan after each move it makes
2) update the map with new or removed obstacles
3) re-run the wavefront algorithm
4) react to the new updated solution

If no solution is found at all, delete obstacles from your map until a solution is found. In my source code, the robot deletes all obstacles when no solution is found - not always desirable, but it works.

Results

To test my algorithm, I put it on my modded iRobot Create platform. It uses a scanning Sharp IR as its only sensor.
MODDING THE iROBOT CREATE

The iRobot Create The iRobot Create is a commercial robot hacked up from a previous robot vacuum cleaner they produced. They have been trying to encourage the hobbyist and educational community to start developing these things and through one of their schemes I landed a free Create to toy with. A video of the robot running around my house using the default programming: My end goal of this project was to implement real-time SLAM (simultaneous localization and mapping) onto the robot. But I made this plan before I was aware of the capabilities (i.e. limitations) of the iRobot. For a start, it only uses the ATmega168 microcontroller. This is incredibly slow with huge memory limitations!

Instead I decided to implement real-time adaptive mapping, and just have it update the map with new scans. It's not matching against an old map as in SLAM, but it still updates the map to remove accumulated navigation error.

The Create Command Structure

To communicate with the Create, you must send serial commands to its magical green box circuit board thingy inside the Create. Upon breaking this made-in-China box open I still couldn't make out most of the electronics . . .

So to send these serial commands, we must program the Command Module. This green box thing has an ATmega168 microcontroller inside, a common and easy-to-use microcontroller. This is the same microcontroller I'm using on the $50 Robot, so all the source code is cross-platform.

Now, to command the Create to do stuff, all you do is occasionally send commands to it from your ATmega168. You can also ask it for sensor data using the delayAndUpdateSensors(update_delay); command in my source code.

The Create Encoders

The Create does have high resolution encoders, but I'm not sure what the resolution is because it's not in any of the manuals.

Yet despite the high-resolution encoders, they are still inaccurate. I'm not sure if it's dust or what, but the counts were constantly skipping. I wouldn't rely on the encoders at all. The sample software that comes with the iRobot Create does not effectively use the encoders. It only does a point-and-shoot kind of algorithm for angle rotations, and a 'good enough' algorithm for distance measurement. The source code doesn't even use the 156 (wait for distance) or 157 (wait for angle) commands! Of course, for their uses, the source didn't need these commands. But I need precise encoder measurements for encoder-based navigation, so I had to write up my own stuff . . . First I tried implementing the 156 and 157 commands, but for some reason it occasionally didn't work. Strange things happened, even resulting in program crashes. And even when it did work, the encoder measurements were still error-ing (it's a word because I made it up). The best I could do was write my own methods. Use these functions for somewhat accurate Create movement:
//rotate clockwise (CW) and counter-clockwise (CCW)
rotate_CCW(180,250); //(angle, rotation_speed)
rotate_CW(90,150); //(angle, rotation_speed)

//move straight (use negative velocity to go in reverse)
straight(cell_size,100); //(distance, velocity)

stop();//don't make me explain this function!!!
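For example (a sketch of my own, not taken from the downloadable source), one step of wavefront-style movement - turn to face the next cell, then drive exactly one cell - would look something like this, where cell_size is whatever distance one grid square works out to on your robot:

//turn to face the next cell (here: a right turn), then drive one cell and halt
rotate_CW(90, 150);          //(angle, rotation_speed)
straight(cell_size, 100);    //(distance, velocity)
stop();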

Although it doesn't correct for overshoot or undershoot, it at least keeps track of it to account for it in the next motion. This still results in error, but not as much as before.

The Stampy Edge Detection Algorithm

To start off my robot's adventures, I needed to implement a basic, highly reactive algorithm to test out my proto-code and sensor setup. I decided to implement my Stampy Edge Detection algorithm. I originally developed this algorithm for my Stampy sumo robot so that it could quickly locate the enemy robot. But I also found many other neat tricks the algorithm can do, such as with my outdoor line following robot that uses a single photoresistor! The concept is simple. A scanning Sharp IR rangefinder does only two things: If no object is seen, the scanner turns right. If an object is seen, the scanner turns left. As shown, the scanner goes left if it sees a googly-eyed robot. If it doesn't detect it, the scanner turns right until it does. As a result, the scanner converges on the left edge of the googly-eyed robot:

Now the robot always keeps track of the angle of the scanner. So if the scanner is pointing left, the robot turns left. If the scanner is pointing right, the robot turns right. And of course, if the scanner is pointing straight ahead, the robot just drives straight ahead. For more detailed info, visit my write-up on my Stampy sumo robot.

Building the Scanner

The first step to the hardware is to make a mount for the scanner. Below are the parts you need, followed by a video showing how everything is assembled.

Wiring up the Hardware Now I needed to get some wiring to attach the servo and Sharp IR to the Create robot serial port. Going through my box of scrap wire, I found this.

To make it, I just took some serial cable and put some headers on it with heatshrink. You can use whatever you want.

Then I plugged it into the center serial port as so:

How do I know which pins of the serial port to use? Well, I looked up the pin-out in the manual:

To distribute wiring (with a power bus), you need four pins: power and ground, an analog pin for the sensor, and a digital output pin for the servo. What does that mean? Connect all the grounds (black) to each other, and connect all the power lines (red) to each other. Each signal line (yellow) gets its own serial cable wire. To make a power bus, I got a piece of breadboard and male headers as such:

Then I used a Dremel to cut off a small piece of it, and soldered on the headers with the proper power distributing wiring:

Then I plugged in everything to the power bus as so. Refer to the pin-out in the previous step to make sure where everything plugs in to.

The last step is to attach the servo. You need the Sharp IR sensor centrally located, and the only place I could find available was that large empty space in the center (it was a no-brainer). I didn't want to drill holes or make a special mount (too much unnecessary effort), so I decided to use extra-strength double-sided sticky tape (see below image). My only concern about this tape was that I may have difficulties removing the servo in the future . . . (it's not a ghetto mount, this stuff really holds).

To attach the servo on, I cut a piece off and stuck it to the bottom of my servo:

And then I stuck the other side of the tape onto the robot as so:

Programming the iRobot Create with Mod

There are many ways to program your Create. iRobot recommends using WinAVR (22.8MB), as do I. Install that program. But I prefer to program using the IDE called AVR Studio, version 4.13, build 528 (73.8MB) - an optional install. I won't go into detail on programming the Create because the manual tells you how. But if you are still curious how I did it, here is my write-up on how to program an AVR. Unfortunately I was never able to get AVR Studio to communicate with my Create . . . so instead I used AVRDUDE (which comes with WinAVR). To do this, I just opened up a command window and did stuff like this (click to enlarge):

avrdude -p atmega168 -P com9 -c stk500 -U flash:w:iRobot.hex

Again, it's all in the Create manuals. To help you get started, here is my source code: iRobot Create Sharp IR Scanner source code (August 20th, 2007). After uploading the program, just turn on your robot, push the black button, and off to attacking cute kittens it goes! Enjoy the video: Yes, it is programmed to chase a can of beer . . .

Robot Navigation

The next step to programming more intelligence into your robot is for it to have memory, and then to make plans with that memory. In this case, it will store maps of places the robot can 'see'. Then by using these maps, the robot can navigate around obstacles intelligently to reach goals. The robot can also update these maps so that it accounts for people moving around in the area. The method I used for this is the wavefront algorithm, using a Sharp IR scanner as my sensor. The scanner does a high-resolution scan so that it can find even the smallest of objects. If anything is detected in a block, even something as thin as a chair leg (the enemy of robots), the entire block is considered impassable. Why would I do this? Because the memory on a microcontroller is limited and cannot store massive amounts of data. In fact, it was incapable of storing maps greater than 20x20!!! Plus, the small increase in robot movement efficiency does not compare to the much larger increase in computational inefficiency.

A quick example of what the map would look like:

I decided that the 'world' should be discretized into square units exactly the same size as the robot. This way each movement the robot takes will be one robot unit length. Why did I do this? Computational simplicity. Higher map resolution requires more processing power. But for a home scenario high accuracy isn't required, so no need to get complicated. As you can see, each terrain situation requires a different optimal discretization . . . Remember to check out my wavefront algorithm tutorial if you want to learn more.

Results

Enjoy! Notice that it's an unedited video: Yes, I do realize I have a lot of cereal boxes . . . I actually have more . . . I like cereal =)

And what you have really been waiting for, the WaveFront Source Code: iRobot Create WaveFront Source Code (September 9th 2007)

Also, videos showing what's inside a Roomba vacuum, just to see what you can hack out of it. Enjoy!

Recursive WaveFront

There is another way to do the wavefront algorithm, using recursive functions. I've been told this method is inefficient, especially on very large maps. But it doesn't matter anyway, because a small microcontroller doesn't have the memory for deep recursion. This is an animation of the recursive wavefront process:

I won't go into detail on this, but it's obviously a 'wavefront'!

Wavefront Simulation

It can be quite time consuming to test out robot navigation algorithms on the actual robot. It takes forever to tweak the program, compile, upload to the robot, set up the robot, turn it on, watch it run, then figure out why it failed . . . the list goes on. Instead, it is much easier to do this with simulation. You write the program, compile, then run it locally. You get an instant output of results to view. The disadvantage of simulation is that it's really hard to simulate the environment and get the robot physics perfect, but for most applications simulation is the best way to work out all the big bugs in the algorithm. This is a simulation I did showing a robot doing a wavefront, moving to the next location, then doing another wavefront update. For a robot (R) moving through terrain with moving objects (W), the robot must recalculate the wavefront after each move towards the goal (G). I didn't implement the adaptive mapping in simulation, just the wavefront and robot movement.

If you want to see the entire simulation, check out the simulation results.txt. Below is an excerpt:
Starting Wavefront

Old Map: (leftover numbers from the previous wavefront, cleared in the next step)

Unpropagation Complete:
R W 0 0 0 W
0 W 0 0 0 0
0 W 0 0 W 0
0 W 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

Adding Goal:
R W G 0 0 W
0 W 0 0 0 0
0 W 0 0 W 0
0 W 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

Sweep #: 1 through Sweep #: 6 then grow the numbers outward from the goal, one 'wave' per sweep, until the robot's cell borders a numbered cell:

Finished Wavefront:
R  W  G  2  3  W
10 W  2  3  4  5
9  W  3  4  W  6
8  W  4  5  6  7
7  6  5  6  7  8
8  7  6  7  8  9

After each move the old numbers are unpropagated (cleared back to 0), the goal is re-added, and the wavefront is recalculated. In the log the robot (R) steps down the left side of the wall column, across the bottom, and back up the third column, until the final cycle ends with R directly below G:

Finished Wavefront:
0 W G 2 3 W
0 W R 0 0 0
0 W 0 0 W 0
0 W 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

Press any key to continue . . .

You can download a copy of my wavefront simulation software and source. I compiled the software using Bloodshed Dev-C++. If you want, you may also try a different C compiler. You can also find wavefront code in BASIC and wavefront in Python posted on the forum.
