
Robotics

Transforming the world...


Introduction
Introduction - Robotics
INTRODUCTION - ROBOTICS
 Robot Origin and Evolution

 The word "robot" was coined by the Czech novelist Karel Capek in 1920.

 "Robota" is the Czech word for worker or servant.

Evolution of robotics: industrial robot to humanoid robot


Introduction - Robotics
BRIEF HISTORY OF ROBOTICS
• Virtually anything that operates with some degree of autonomy, usually under computer control, has at some point been called a robot.
• Born from two earlier technologies:

1. Teleoperators.
2. Numerically Controlled Milling Machines.
• The first robots essentially combined the mechanical linkages of the teleoperator with the autonomy and programmability of CNC machines.
• Teleoperation means operating a machine at a distance; a device or machine operated by a person from a distance is called a teleoperator.
Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS

• 1947 — the first servoed electric powered teleoperator is developed.


• 1948 — a teleoperator is developed incorporating force feedback.
• 1949 — research on numerically controlled milling machine is initiated.
• 1954 — George Devol designs the first programmable robot.
• 1956 — Joseph Engelberger, a Columbia University physics student, buys the rights to Devol's
robot and founds the Unimation Company.
• 1961 — the first Unimate robot is installed in a Trenton, New Jersey plant of General Motors to
tend a die casting machine.
• 1961 — the first robot incorporating force feedback is developed.
• 1963 — the first robot vision system is developed.
Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS

First Servoed Electric Powered Teleoperator | First Programmable Robot


Introduction - Robotics
FIRST ROBOT BY UNIMATION
Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS
• 1971 — the Stanford Arm is developed at Stanford University
• 1973 — the first robot programming language (WAVE) is developed at Stanford
• 1974 — Cincinnati Milacron introduced the T3 robot with computer control
• 1975 — Unimation Inc. registers its first financial profit
• 1976 — the Remote Center Compliance (RCC) device for part insertion in assembly is
developed at Draper Labs in Boston
• 1976 — Robot arms are used on the Viking I and II space probes and land on Mars
• 1978 — Unimation introduces the PUMA robot, based on designs from a General Motors study
• 1979 — the SCARA robot design is introduced in Japan
• 1981 — the first direct-drive robot is developed at Carnegie-Mellon University
• 1982 — Fanuc of Japan and General Motors form GM Fanuc to market robots in North
America
Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS

Stanford ARM PUMA Robot


Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS

Jason – Remotely Operated Vehicle | SCARA Design – Introduced in Japan


Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS
 1983 — Adept Technology is founded and successfully markets the direct drive robot

 1986 — the underwater robot, Jason, of the Woods Hole Oceanographic Institute, explores the
wreck of the Titanic, found a year earlier by Dr. Robert Ballard.
 1988 — Staubli Group purchases Unimation from Westinghouse

 1988 — the IEEE Robotics and Automation Society is formed

 1993 — the experimental robot, ROTEX, of the German Aerospace Agency (DLR) was flown
aboard the space shuttle Columbia and performed a variety of tasks under both teleoperated
and sensor-based offline programmed modes
 1996 — Honda unveils its Humanoid robot, a project begun in secret in 1986
Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS
 1997 — the first robot soccer competition, RoboCup-97, is held in Nagoya, Japan and draws 40
teams from around the world
• 1997 — the Sojourner mobile robot travels to Mars aboard NASA's Mars Pathfinder mission
 2001 — Sony begins to mass produce the first household robot, a robot dog named Aibo
 2001 — the Space Station Remote Manipulation System (SSRMS) is launched in space on board
the space shuttle Endeavour to facilitate continued construction of the space station
 2001 — the first telesurgery is performed when surgeons in New York performed a laparo-
scopic gall bladder removal on a woman in Strasbourg, France
 2001 — robots are used to search for victims at the World Trade Center site after the
September 11th tragedy
• 2002 — Honda's Humanoid Robot ASIMO rings the opening bell at the New York Stock
Exchange on February 15th
Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS

Sojourner – Rover (Mobile Robot) Robocup - 97


Introduction - Robotics
MILESTONES IN THE HISTORY OF ROBOTICS

Aibo – Sony (Dog Robot) ASIMO – Honda Humanoid


Introduction - Robotics
DEFINITIONS OF ROBOT

 An official definition of such a robot comes from the Robot Institute of America (RIA):

 A robot is a reprogrammable, multifunctional manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks.
 A robot is a software-controllable mechanical device that uses sensors to guide one or more end-effectors through programmed motions in a workspace in order to manipulate physical objects.
 Generally, "robot" means an industrial robot, also called a robot manipulator or robotic arm.
 A machine that moves things from point A to point B.

 A goal-oriented machine that can Sense, Plan and Act.

Introduction - Robotics
KNOWLEDGEBASE FOR ROBOTICS DISCIPLINES

 Mechanical Engineering
 Electrical Engineering
 Computer Engineering
 Electronics Engineering
 Physics and Biology
 Mathematics

Diagram: Robotics at the centre, drawing on Mechanical Engineering, Electrical Engineering, Computer Engineering, Electronics Engineering, Physics and Biology, and Mathematics.
Introduction - Robotics
INTRODUCTION - ROBOTICS
Features of Robots:
 Throughput
 Accuracy
 Quality
 Consistency
 Reliability
 Safety

Robots perform tasks such as:

 Deep-sea and space exploration
 Boring, repetitive actions
 Dangerous, unhealthy, risky work, etc.

Such actions mainly fall under the 3 D's – Dirty, Dull, Dangerous
Introduction - Robotics
INTRODUCTION - ROBOTICS
ROBOTICS IN FEW AREAS
• Robot Enabled Surgery – Surgical Robots – Similar to Teleoperation
• Human Limitations are Augmented – Make us stronger – Exoskeletons
• Observing Assets – Maintenance and Inspection – Pipelines, Powerlines, Irrigation – Monitoring Robots
• Exploration – Underwater Robots – Eagle Ray – Autonomous Mode, Space – Curiosity – Mars mission

WHY ROBOTICS?
• Robots are real
• They are diverse in form and function
• Rapidly increasing functionality
• Solutions to important problems of our time
Introduction - Robotics
ESSENTIAL CHARACTERISTICS
• Sensing: First of all, a robot must be able to sense its surroundings. Robot sensors include light sensors (eyes), touch and pressure sensors (hands), chemical sensors (nose), hearing and sonar sensors (ears), and taste sensors (tongue).

• Movement: A robot needs to be able to move around its environment, whether rolling on wheels, walking on legs or propelling itself by thrusters.

• Energy: A robot needs to be able to power itself. A robot might be solar powered, electrically powered, or battery powered.

• Intelligence: A robot needs some kind of "smarts." This is where programming enters the picture. A programmer is the person who gives the robot its "smarts."

Diagram: Robot characteristics - sensing, movement, energy/power, intelligence.
Introduction - Robotics
SENSING CHARACTERISTICS
Introduction - Robotics

DESIGN AND OPERATION OF ROBOTICS SYSTEMS

 Dynamic system modelling and analysis.
 Feedback control – sensors and signal conditioning.
 Actuators (muscles) and power electronics.
 Hardware/computer interfacing.
 Computer programming.

Diagram: programming, microcontroller, actuators, sensors. The cycle represents the flow from mechanical systems through control, electrical, electronics and computer systems to the analysis of mechanisms.
Introduction - Robotics
THREE LAWS OF ROBOTICS
The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov.

• A robot may not harm a human or, through inaction, allow a human to come to harm.
• A robot must obey the orders given by human beings, except when such orders conflict with the
First Law.
• A robot must protect its own existence as long as it does not conflict with the First or Second
Laws.
Introduction - Robotics
FEW ROBOTS EXAMPLES……

Manipulator Robot Wheeled Robot Legged Robot Autonomous Underwater Vehicle

Unmanned Aerial Vehicle Mars Rover Snake Robot


Introduction - Robotics
ROBOTIC APPLICATIONS……

Decontaminating robot for rescuing nuclear plants from hazardous tasks | Welding robot for repetitive kinds of work | Scrumbat menial-task robot
Introduction - Robotics
REASONS FOR USING ROBOTS IN INDUSTRY

• Reduce labor cost

• Eliminate dangerous jobs

• Increase output rate

• Improve product quality

• Increase product flexibility

• Reduce material waste

• Reduce labor turnover

• Reduce capital cost


Introduction - Robotics
PRESENT AND FUTURE APPLICATION
• Robots are used to perform tasks that had previously been rejected as being impossible to
undertake because of excessive personnel and time requirements.
• Robots are used in advanced nuclear reactors, where they have to work in extremes of temperature,
humidity, and radiation level and have to climb over obstacles.
• The military is hoping to make up for its shortage of personnel by using robots to make the
human forces more efficient.
• The automated robot cook, maid, butler, gardener is still far from reality.
• Robots are currently used in education as tools for teaching various topics.
• Programmable mobile devices such as the Big Trak tank robot can be used to teach
programming.
Space Robotics: Space Climber | Underwater Robotics: Manipulator | Electric Mobility: EO smart connecting car

Logistics, Production and Consumer | SAR: Advanced Security Guard | Rehabilitation Systems
Introduction - Robotics
ROBOTS IN INDUSTRY
• Agriculture
• Automobile
• Construction
• Entertainment
• Health care: hospitals, patient-care, surgery , research, etc.
• Laboratories: science, engineering , etc.
• Law enforcement: surveillance, patrol, etc.
• Manufacturing
• Military: demining, surveillance, attack, etc.
• Mining, excavation, and exploration
• Transportation: air, ground, rail, space, etc.
• Utilities: gas, water, and electric
• Warehouses
Introduction - Robotics
INDUSTRIAL APPLICATIONS OF ROBOTS

 Material handling

 Material transfer

 Machine loading
/unloading
 Spot welding

 Continuous arc welding

 Spray coating

 Assembly

 Inspection
Introduction - Robotics
COUNTRIES WITH ROBOTIC APPLICATIONS

• Japan

• US

• Germany

• Sweden

• France

• Great Britain

• Italy
Introduction - Robotics
SUMMARY

 Robotics - the integration of computers and controlled mechanisms to make devices re-programmable and versatile.
 Modern mathematical representations are used to plan robotic tasks and integrate sensors into the task planning.
 A few advantages of using robots are flexibility and performance quality.

 The downside is that robotics affects the labour pool and increases the educational requirements for manufacturing personnel.
 Robots are used in most industries and will be used even more in the decades to come.
Mechanisms
Mechanisms - Robotics
ROBOT PARTS

 Base
 Shoulder
 Elbow
 Wrist
 Tool-plate
 End-effectors
Mechanisms - Robotics
DEFINITIONS - TERMINOLOGY

• Robot: An electromechanical device with multiple degrees-of-freedom (dof) that is programmable to accomplish a variety of tasks.
• Robotics: The science of robots. Humans working in this area are called roboticists.
• DOF: Degrees-of-freedom, the number of independent motions a device can make; also called mobility.
• Manipulator: An electromechanical device capable of interacting with its environment; anthropomorphic if designed or appearing like a human arm.
• End-Effector: The tool, gripper, or other device mounted at the end of a manipulator, for accomplishing useful tasks.
• Workspace: The volume in space that a robot's end-effector can reach, both in position and orientation.
• Kinematics: The study of motion without regard to forces/torques.
• Dynamics: The study of motion with regard to forces/torques.
• Actuator: Provides force/torque for robot motion.
• Sensor: Reads actual variables in robot motion for use in control.
Mechanisms - Robotics

LINKS, JOINTS, KINEMATIC CHAIN & DEGREE OF FREEDOM


Degrees of Freedom: The number of single-axis rotational joints in the arm; a higher number indicates
increased flexibility in positioning a tool.
Mechanisms - Robotics
LINKS, JOINTS, POSE
• Position: The translational (straight-line) location of an object.
• Orientation: The rotational (angular) location of an object. Orientation is measured by
roll, pitch, and yaw angles.
• Pose: Position and orientation taken together.
• Link: A rigid piece of material connecting joints in a robot.
• Joint: The device which allows relative motion between two links in a robot.

revolute (R) prismatic (P) universal (U) spherical (S)
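As an illustrative sketch (not from the slides), the roll, pitch, and yaw angles used above to measure orientation can be assembled into a rotation matrix; the Z-Y-X composition order and the function name are assumptions, since several conventions exist:

```python
import math

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll (about x), pitch (about y), yaw (about z),
    composed in Z-Y-X order -- one common convention among several."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# A pose couples a position with an orientation.
pose = {
    "position": (0.5, 0.0, 0.3),                        # metres, illustrative
    "orientation": rpy_to_matrix(0.0, 0.0, math.pi / 2),
}
```

With zero angles the matrix is the identity; a pure yaw of 90 degrees rotates the x-axis onto the y-axis.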


Mechanisms - Robotics
FORWARD AND INVERSE KINEMATICS
Forward Kinematics :
Use of the kinematic equations of a robot to compute the position of the end-
effector from specified values for the joint parameters.

Inverse Kinematics :
Use of the kinematic equations to determine the joint parameters that
provide a desired position of the robot's end-effector.

ROBOTIC ARM
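The two definitions can be made concrete with a minimal sketch for a hypothetical two-link planar arm; the link lengths and function names here are illustrative, not part of the slides:

```python
import math

# Hypothetical two-link planar arm; link lengths are illustrative only.
L1, L2 = 1.0, 0.8

def forward(theta1, theta2):
    """Forward kinematics: joint angles (rad) -> end-effector position (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Inverse kinematics: end-effector position -> one of the two elbow solutions."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target outside the reachable workspace")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2
```

Applying `inverse` to the output of `forward` recovers a joint configuration that reaches the same point, which is a handy sanity check for both directions.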
Mechanisms - Robotics

SERIAL AND PARALLEL MANIPULATORS

Serial Manipulator Parallel Manipulator


Serial Manipulators:
• They are designed as a series of links connected by motor-actuated joints that extend from a base to an
end-effector.
• Anthropomorphic arm structure described as having a "shoulder", an "elbow", and a "wrist".

Parallel Manipulators :
• Links are arranged in closed kinematic chains rather than a single open chain.
• Great structural rigidity and accuracy can be achieved.
Mechanisms - Robotics

ROBOT CLASSIFICATION ON GEOMETRY/KINEMATIC ARRANGEMENTS


Robot Configuration Axis 1 Axis 2 Axis 3 Total Revolute
Jointed arm / Articulated Configuration Revolute Revolute Revolute 3
Polar / Spherical Configuration Revolute Revolute Prismatic 2
SCARA Configuration Revolute Revolute Prismatic 2
Cylindrical Configuration Revolute Prismatic Prismatic 1
Cartesian / Rectangular Configuration Prismatic Prismatic Prismatic 0

Spherical Configuration Articulated Configuration


Mechanisms - Robotics

ROBOT CLASSIFICATION ON GEOMETRY/KINEMATIC ARRANGEMENTS

SCARA Configuration

Cylindrical Configuration
Cartesian / Rectangular Configuration
Mechanisms - Robotics
JOINT ARM/ARTICULATED ROBOT
• Also called a revolute, or anthropomorphic, manipulator.
• When all joints are revolute it is known as an articulated-coordinate robot.
• This is the most widely used arm configuration because of its flexibility in reaching any part of the
working envelope.
• This flexibility allows complex applications such as spray painting and welding to be implemented
successfully.
Mechanisms - Robotics
POLAR/SPHERICAL CONFIGURATION
• The second/elbow joint of the articulated arm is replaced with a prismatic joint.
• The work envelope generated in this case is the volume between two concentric spheres.
• These robots can generate a large working envelope.
• The robots can allow large loads to be lifted.
• The semi-spherical operating volume leaves a considerable space near to the base that cannot be
reached.
• This design is used where a small number of vertical actions is adequate: the loading and unloading of
a punch press is a typical application.
Mechanisms - Robotics

SCARA (SELECTIVE COMPLIANCE ASSEMBLY ROBOT ARM)


• Similar to the spherical configuration (RRP), but with all joint axes in a vertical orientation.
• Although originally designed specifically for assembly work, these robots are now being used for
welding, drilling and soldering operations because of their repeatability and compactness.
• They are intended for light to medium loads and the working volume tends to be restricted as there is
limited vertical movement.
Mechanisms - Robotics
CYLINDRICAL COORDINATE SYSTEM
• The first joint is revolute; the remaining two joints are prismatic.
• The radial distance is given as a positive or negative number depending on which side of the reference
plane the point lies.
• They have a rigid structure, giving them the capability to lift heavy loads through a large working
envelope, but they are restricted to area close to the vertical base or the floor.
• This type of robot is relatively easy to program for loading and unloading of palletized stock, where
only the minimum number of moves is required to be programmed.
Mechanisms - Robotics
CARTESIAN/RECTANGULAR CONFIGURATION
• A Cartesian coordinate robot is an industrial robot whose three principal axes of control are linear and at
right angles to each other.
• Due to their rigid structure they can manipulate high loads so they are commonly used for pick-
and-place operations, machine tool loading, in fact any application that uses a lot of moves in the
X,Y,Z planes.
• These robots occupy a large space, giving a low ratio of robot size to operating volume. They may
require some form of protective covering
Mechanisms - Robotics

WORKSPACE
Def - The workspace of a manipulator is the total volume swept out by the end-effector as the manipulator
executes all possible motions, as constrained by the mechanical arm.

Reachable Workspace :
The set of points reachable by the manipulator.
Dexterous Workspace :
The set of points the manipulator can reach with an arbitrary orientation of the end-effector.
Singularity: Kinematic singularity is a point in the workspace where the robot loses its ability to move the
end effector in some direction no matter how it moves its joints. It typically occurs when two of the robot's
joints line up, making them redundant.

Reachable Workspace Dexterous Workspace Singularity
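A reachable workspace can be approximated numerically by sweeping the joint space, as in this sketch for the same kind of hypothetical two-link planar arm (link lengths are illustrative assumptions):

```python
import math
import random

L1, L2 = 1.0, 0.6   # illustrative link lengths of a two-link planar arm

def sample_reachable(n):
    """Approximate the reachable workspace by sweeping random joint angles."""
    pts = []
    for _ in range(n):
        t1 = random.uniform(-math.pi, math.pi)
        t2 = random.uniform(-math.pi, math.pi)
        pts.append((L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
                    L1 * math.sin(t1) + L2 * math.sin(t1 + t2)))
    return pts

# For this arm the reachable set is the annulus |L1 - L2| <= r <= L1 + L2,
# so every sampled point must land inside those bounds.
for px, py in sample_reachable(1000):
    r = math.hypot(px, py)
    assert abs(L1 - L2) - 1e-9 <= r <= L1 + L2 + 1e-9
```

Sampling like this only maps position reachability; the dexterous workspace would additionally require checking every end-effector orientation at each point.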


Mechanisms - Robotics
THE HAND OF A ROBOT
• Grippers: Generally used to grasp and hold an object and place it at a desired location.
• Tools: A robot is required to manipulate a tool to perform an operation on a workpart.
Mechanisms - Robotics
MACHINE TOOLS
Robot end-effectors can also be machine tools such as drills, grinding wheels, cutting wheels and
sanders.
• Measuring Instruments: Measuring instruments are end-effectors that allow the robot to precisely
measure parts by running the arm lightly over the part using a measuring probe or gauge.
• Laser and Water Jet Cutters : Laser and water jet cutters are robot end-effectors that use high-intensity
laser beams or high- pressure abrasive water jets to cut sheet metal or fiberglass parts to shape.
• Welding Torches : Welding torches are robot end-effectors that enable robots to weld parts together.
These end- effectors are widely used in the automotive industry.
• Spray Painting Tools : Automatic spray painting is a useful application for robots, in the automotive and
other industries.
• Tool Changers : Some robot systems are equipped with automatic tool changers to extend the
usefulness of the robot to more tasks

Drill Tool NASA Cutter/Scoop Robot Measuring System


Mechanisms - Robotics
MACHINE TOOLS

Laser-beam Tool Welding Torch

Robot Tool Changer

Glue-Applying Robot Spray-Painting Robots


Mechanisms - Robotics
ROBOT MOTION CONTROL METHODS

Point to Point:
• Moves through a discrete set of points.
• The points are stored with a teach pendant.
• No control over the path of the end-effector between points.

Continuous Path:
• The entire path is controlled.
• The robot can be commanded to follow a contour or a straight line.
• Velocity and acceleration are controlled.
• Sophisticated controllers and software are required.
Mechanisms - Robotics
ROBOT SYSTEMS AND SPECIFICATIONS

Robot characteristics are based on some Properties:

• Number of Axes - number
• Load Carrying Capacity - kilograms (kg)
• Maximum Speed, Cycle Time - mm/sec
• Reach and Stroke - mm
• Tool Orientation - degrees
• Repeatability - mm
• Precision and Accuracy - mm
• Controller Resolution
• Wrists and End-Effectors


Mechanisms - Robotics
CAPACITY AND SPEED

Load carrying capacity varies depending upon the robot.

Example :
Minimover 5 microbot – 2.2 kg
GCA – XR6 Extended – 4928 kg

Cycle time:
The time required to complete one periodic operation.

Load Carrying Robot


Mechanisms - Robotics
TECHNICAL ROBOTICS TERMS
 Speed: Speed is the amount of distance per unit time at which the robot can move, usually specified in inches
per second or meters per second. The speed is usually specified at a specific load or assuming that the robot
is carrying a fixed weight. Actual speed may vary depending upon the weight carried by the robot.

 Load Bearing Capacity: Load bearing capacity is the maximum weight-carrying capacity of the robot. Serial
robots that carry large weights, but must still be precise, are heavy and expensive, with poor (low) payload to
weight ratios.

 Accuracy: Accuracy is the ability of a robot to go to the specified position without making a mistake. It is
impossible to position a machine exactly. Accuracy is therefore defined as the ability of the robot to position
itself to the desired location with the minimal error (usually 0.001 inch).

 Repeatability: Repeatability is the ability of a robot to repeatedly position itself when asked to perform a task
multiple times. Accuracy is an absolute concept, repeatability is relative. Note that a robot that is repeatable
may not be very accurate. Likewise, an accurate robot may not be repeatable.

 Precision: Precision is the 'fineness' with which a sensor can report a value. For example, a sensor that reads
2.1178 is more precise than a sensor that reads 2.1 for the same physical variable. Precision is related to
significant figures. The number of significant figures is limited to the least precise number in a system of
sensing or string of calculations.
Mechanisms - Robotics

EXAMPLE - REPEATABILITY, ACCURACY, PRECISION


This example came from real-world experience.

Three days in a row I bicycled the exact same route to and from work and I was amazed that my bike computer
recorded the following data.

DAY     DISTANCE (KM)   TIME (HRS:MIN:SEC)
DAY 1   15.06           0:45:17
DAY 2   15.06           0:45:19
DAY 3   15.06           0:45:18

What are the repeatability, accuracy, and precision of the above?
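One hedged way to reason about the question: the spread of the repeated readings quantifies repeatability, the number of reported digits reflects precision, and accuracy cannot be judged without the true route length. A small sketch:

```python
# Bike-computer readings from the table above.
times_s = [45 * 60 + 17, 45 * 60 + 19, 45 * 60 + 18]   # 0:45:17, 0:45:19, 0:45:18
distances_km = [15.06, 15.06, 15.06]

# Repeatability: spread of repeated readings of the same quantity.
time_spread_s = max(times_s) - min(times_s)                  # 2 s over three runs
distance_spread_km = max(distances_km) - min(distances_km)   # 0.0 -- repeats exactly

# Precision: "15.06" reports the distance to 4 significant figures.
# Accuracy cannot be judged from these data alone: that would require
# comparing against the true route length.
```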


Mechanisms - Robotics
WORK ENVELOPE & WORKCELLS
• Work envelope is the maximum robot reach, or volume within which a
robot can operate. This is usually specified as a combination of the limits of
each of the robot's parts. The figure below shows how a work-envelope of a
robot is documented. This is also called Robot Workspace.

• Robots seldom function in an isolated environment. In order to do useful


work, robots must coordinate their movements with other machines and
equipment, and possibly with humans. A group of machines/equipment
positioned with a robot or robots to do useful work is termed a Workcell.

• For example, a robot doing welding on an automotive assembly line must


coordinate with a conveyor that is moving the car-frame and a laser-
positioning / inspection robot that uses a laser beam to locate the position of
the weld and then inspect the quality of the weld when it is complete.
Mechanisms - Robotics
REACH AND STROKE
Rough measurements of the workspace/work envelope:
Horizontal Reach
The maximum radial distance the wrist can be positioned from the vertical axis.
Horizontal Stroke
The total radial distance the wrist can travel.
Mechanisms - Robotics
ROBOT POWER SOURCES/ ACTUATORS
The robot drive system and power source determine characteristics such as
speed, load-bearing capacity, accuracy, and repeatability as defined above.

• Electric motors (DC servomotors)


A robot with an electrical drive uses electric motors to position the robot.
These robots can be accurate, but are limited in their load-bearing capacity.

• Hydraulic cylinders (fluid pressure)


A robot with a hydraulic drive system is designed to carry very heavy
objects, but may not be very accurate.

• Pneumatic cylinders (air pressure)
Human Arm Model with McKibben Muscles
A pneumatically-driven robot is similar to one with a hydraulic drive system;
it can carry less weight, but is more compliant (less rigid to disturbing
forces).

• McKibben Artificial Muscles (air pressure)


The McKibben artificial muscle was invented in the 1950s, but was too
complicated to control until the 1990s (computers and nonlinear control
technology have greatly improved). Like the human muscle, these artificial
muscles can only contract, and cannot push.
McKibben Muscle
Mechanisms - Robotics
WRISTS

 Wrist Joints are all revolute joints.

 Spherical Wrists – Common practice


Mechanisms - Robotics
TOOL ORIENTATIONS

• Three Major Axes – Shape of Work Envelope



• Remaining Axes (Minor Axes) – Kinds of Orientations of tool/hand.

• Mobile tool coordinate frame "M": M = {m1, m2, m3}

• Wrist coordinate frame "F": F = {f1, f2, f3}

Spherical Wrist

Yaw, Pitch, Roll of the Tool


Mechanisms - Robotics
END EFFECTORS AND GRIPPERS
• End-effectors are the tools attached to the
end of the robot arm that enable it to do
useful work.

• Typically, the end-effectors must be


purchased or designed separately. Also
called end-of-arm-tooling, end-effectors
are usually attached to the robot tool plate
(after the last wrist joint) via a standard
mechanical interface.

• Grippers are the most common end-


effectors. They provide the equivalent of a
thumb and an opposing finger, allowing the
robot to grasp small parts and manipulate
them.
Mechanisms - Robotics
NUMBER OF AXES

• First three axes are used to establish the position of the wrist.

• Remaining axes are used to establish the orientation of the tool/gripper.

Axes of a Robotic Manipulator


Axes Type Functions

1-3 Major Positioning the Wrist

4–6 Minor Orient the Tool

7- 11 Redundant Avoid Obstacles


Mechanisms - Robotics
ROBOT CONTROL METHODS

• All robot control methods involve a computer, robot, and sensors.

• Lead-Through Programming: The human operator physically grabs


the end-effector and shows the robot exactly what motions to make for a
task, while the computer memorizes the motions (memorizing the joint
positions, lengths and/or angles, to be played back during task
execution).

• Teach Programming: Move robot to required task positions via teach


pendant; computer memorizes these configurations and plays them
back in robot motion sequence.
The teach pendant is a controller box that allows the human operator to
position the robot by manipulating the buttons on the box. This type of
control is adequate for simple, non-intelligent tasks

Microbot with Teach Pendant


Mechanisms - Robotics
ROBOT CONTROL METHODS

• Off-Line Programming: Off-line programming is the use of


computer software with realistic graphics to plan and program
motions without the use of robot hardware (such as IGRIP).
• Autonomous: Autonomous robots are controlled by computer,
with sensor feedback, without human intervention. Computer
control is required for intelligent robot control. In this type of
control, the computer may send the robot pre-programmed
positions and even manipulate the speed and direction of the
robot as it moves, based on sensor feedback. The computer can
also communicate with other devices to guide the robot through
its tasks.
• Teleoperation: Teleoperation is human-directed motion, via a
joystick. Special joysticks that allow the human operator to feel
what the robot feels are called haptic interfaces.
• Telerobotics: Telerobotics control is a combination of
autonomous and teleoperation control of robot systems.

Force-Reflecting Teleoperation System


at Wright-Patterson AFB
Mechanisms - Robotics
CONTROLLER RESOLUTION
• This is the smallest change that can be measured by the feedback sensors, or caused by the actuators.

• The resolution of a robot is a feature determined by the design of the control unit and is mainly dependent
on the position feedback sensor.

• The programming resolution is the smallest allowable position increment in robot programs and is
referred to as the basic resolution unit (BRU).

• For example, assume that an optical encoder which emits 1000 pulses per revolution of the shaft is directly
attached to a rotary axis. This encoder will emit one pulse for each of 0.36° of angular displacement of the
shaft.

• The unit 0.36° is the control resolution of this axis of motion. Angular increments smaller than 0.36° cannot
be detected. Best performance is obtained when the programming resolution equals the control resolution; in
this case both resolutions can be replaced with one term: the system resolution.
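The encoder arithmetic above is a one-liner; a small sketch (the function name is illustrative):

```python
def control_resolution_deg(pulses_per_rev):
    """Smallest detectable angular increment of a directly coupled rotary encoder."""
    return 360.0 / pulses_per_rev

# The slide's example: 1000 pulses per revolution -> 0.36 degrees per pulse.
resolution = control_resolution_deg(1000)
```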
Sensing
Sensing - Robotics
SENSE - DECIDE - ACT

To be effective, a robot must interact well with its environment and respond correctly to control commands.
Sensing - Robotics
PERCEPTION INTRODUCTION - CHALLENGES
• Dealing with real-world situations
• Reasoning about a situation
• Cognitive systems have to interpret situations based on uncertain and only partially available information
• They need ways to learn functional and contextual information (semantics /understanding)
Sensing - Robotics
COMMON SENSORS

• Tactile sensors or bumpers


Detection of physical contact, security switches
• GPS
Global localization and navigation
• Inertial Measurement Unit (IMU)
Orientation and acceleration of the robot
• Wheel encoders
Local motion estimation (odometry)
• Laser scanners
Obstacle avoidance, motion estimation, scene
interpretation (road detection, pedestrians)
• Cameras
Texture information, motion estimation, scene
interpretation
Sensing - Robotics

CLASSIFICATION OF SENSORS

 What:
Proprioceptive sensors
Measure values internal to the system (robot),
e.g. motor speed, wheel load, heading of the robot, battery status.
Exteroceptive sensors
Acquire information from the robot's environment,
e.g. distances to objects, intensity of the ambient light,
features extracted from the environment.
 How:
Passive sensors
Measure energy coming from the environment; strongly
influenced by the environment.
Active sensors
Emit their own energy and measure the reaction;
better performance, but some influence on the environment.
Sensing - Robotics
CLASSIFICATION OF SENSORS
Sensing - Robotics
ENCODERS
An electro-mechanical device that converts the linear or angular position of a shaft to an analog or digital
signal, making it a linear/angular transducer.
Sensing - Robotics
HEADING SENSORS

Definition:
Heading sensors are sensors that determine the robot‘s orientation and inclination with respect to a
given reference. These can be proprioceptive (gyroscope, accelerometer) or exteroceptive (compass,
inclinometer).

Together with appropriate velocity information, they allow integrating the movement into a position
estimate. This procedure is called dead reckoning (historically "deduced reckoning" in ship navigation).

Sensor types:

 Compass: senses the absolute direction of the Earth's magnetic field


Gyroscope: senses the relative orientation of the robot with respect to a given reference
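A minimal sketch of the dead-reckoning integration described above, from heading and speed samples (the names and sampling format are assumptions):

```python
import math

def dead_reckon(start_xy, samples):
    """Integrate (heading_rad, speed_m_s, dt_s) samples into a position estimate."""
    x, y = start_xy
    for heading, speed, dt in samples:
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
    return x, y

# Drive east for 10 s, then north for 10 s, at 1 m/s.
pos = dead_reckon((0.0, 0.0), [(0.0, 1.0, 10.0), (math.pi / 2, 1.0, 10.0)])
```

Because each step adds onto the previous estimate, any heading or speed error also accumulates, which is why dead reckoning drifts without an external reference.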
Sensing - Robotics
ACCELEROMETER AND GYROSCOPE

• An accelerometer measures the linear acceleration of the device.

• Using the key principles of angular momentum, a gyroscope helps indicate orientation.

Gyroscope Accelerometer
Sensing - Robotics
MEMS ACCELEROMETER
A spring-like structure connects the device to a seismic mass; the mass's displacement within a capacitive
divider converts the motion into an electric signal

 Can be multi-directional
 Can sense up to 50 g

Applications

• Dynamic acceleration
• Static acceleration (inclinometer)
• Airbag sensors (+- 35 g)
• Control of video games (e.g., Nintendo Wii)
Sensing - Robotics
INERTIAL MEASUREMENT UNIT

It uses gyroscopes and accelerometers to estimate the relative position (x, y, z), orientation (roll, pitch, yaw),
velocity, and acceleration of a moving vehicle with respect to an inertial frame.

• In order to estimate the motion, the gravity vector must be subtracted and the initial velocity has to be
known.
• After long periods of operation, drift occurs: an external reference is needed to cancel it.
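A minimal sketch of why the gravity subtraction matters (the sample values are illustrative): integrating the raw z-axis readings of a stationary IMU produces a phantom velocity, while subtracting gravity keeps the estimate at zero.

```python
G = 9.81  # m/s^2, gravitational acceleration

def integrate_velocity(raw_accels, dt, subtract_gravity=True):
    """Integrate z-axis accelerometer readings (m/s^2) into a velocity."""
    v = 0.0
    for a in raw_accels:
        if subtract_gravity:
            a -= G
        v += a * dt
    return v

stationary = [9.81] * 100    # 1 s of readings from an IMU lying flat
v_good = integrate_velocity(stationary, dt=0.01)                        # ~0 m/s
v_bad = integrate_velocity(stationary, dt=0.01, subtract_gravity=False) # ~9.8 m/s
```

After just one second, the uncorrected estimate has already drifted by roughly 9.8 m/s, which is why long runs need an external reference.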
Sensing - Robotics
RANGE SENSORS

 Sonar

 Laser range finder

 Time of flight camera

 Structured light
Sensing - Robotics
RANGE SENSORS – TIME OF FLIGHT

• Large range distance measurement - thus called range sensors

• Range information - key element for localization and environment modeling

• Ultrasonic sensors as well as laser range sensors make use of propagation speed of sound or
electromagnetic waves respectively.

The traveled distance of a sound or electromagnetic wave is given by d = c · t, where

d = distance traveled (usually round-trip)
c = speed of wave propagation
t = time of flight
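The relation can be sketched in code; the propagation speeds are approximate, and the measured time is halved because it covers the round trip:

```python
C_SOUND = 343.0   # m/s, approximate speed of sound in air
C_LIGHT = 3.0e8   # m/s, speed of electromagnetic waves

def distance(t_round_trip, c):
    """Target distance for a measured round-trip time of flight (s)."""
    return c * t_round_trip / 2.0

d_sonar = distance(0.02, C_SOUND)    # 20 ms echo -> ~3.4 m
d_laser = distance(20e-9, C_LIGHT)   # 20 ns echo -> 3 m
```

The six-orders-of-magnitude gap between the two time scales is exactly why laser time-of-flight measurement is so much harder, as the next slide notes.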
Sensing - Robotics
RANGE SENSORS – TIME OF FLIGHT
Significant Features:

• Propagation speed c of sound: 0.3 m/ms


• Propagation speed c of electromagnetic signals: 0.3 m/ns
• Electromagnetic signals travel one million times faster than sound

3 meters - Distance

• Equivalent to 10 ms for an ultrasonic system


• Equivalent to only 10 ns for a laser range sensor
• Measuring time of flight with electromagnetic signals is not an easy task
• Laser range sensors expensive and delicate

The quality of time-of-flight range sensors mainly depends on:

 Inaccuracies in the time of flight measurement (laser range sensors)


 Opening angle of transmitted beam (especially ultrasonic range sensors)
 Interaction with the target (surface, specular reflections)
 Variation of propagation speed (sound)
 Speed of mobile robot and target (if not at standstill)
Sensing - Robotics
VISION AND RANGE SENSORS

Camera Vision
Sensing - Robotics
VELODYNE LASER RANGE FINDER
The Velodyne HDL-64E uses 64 laser emitters.

• Turn-rate up to 15 Hz
• The field of view is 360° in azimuth and 26.8° in elevation
• Angular resolution is 0.09° and 0.4° respectively
• Delivers over 1.3 million data points per second
• The distance accuracy is better than 2 cm and can measure depth up to 50 m
• This sensor was the primary means of terrain map construction and obstacle detection for all the top
DARPA 2007 Urban Challenge teams. However, the Velodyne is currently still much more expensive
than SICK laser range finders (SICK ~$2,000-4,000, Velodyne ~$40,000-80,000)
Sensing - Robotics
STRUCTURED LIGHT-KINECT SENSOR
Major components:

• IR Projector
• IR Camera
• VGA Camera
• Microphone Array
• Motorized Tilt
Sensing - Robotics
ROBOTIC SENSORS
• Position sensors: are used to monitor the position of joints.
• Range sensors: measure distances from the reference point to other points of importance.
• Velocity sensors: are used to estimate the speed with which a manipulator is moved.
Sensing - Robotics
SENSOR-MOUNTED ROBOTS

Big Dog robot by Boston Dynamics Foster-Miller TALON Robot


Sensing - Robotics
ROBOT SENSORS
Robots under computer control interact with a variety of sensors, which are small electronic or electro-
mechanical components that allow the robot to react to its environment. Some common sensors are
described below.
• Vision: A vision system has a computer-controlled camera that allows the robot to see its environment and
adjust its motion accordingly. Used commonly in electronics assembly to place expensive circuit chips
accurately through holes in the circuit boards. Note that the camera is actually under computer control and
the computer sends the signals to the robot based upon what it sees.
• Voice: Voice systems allow the control of the robots using voice commands. This is useful in training robots
when the trainer has to manipulate other objects.
• Tactile: Tactile sensors provide the robot with the ability to touch and feel. These sensors are used for
measuring applications and interacting gently with the environment.
Haptics: From the Greek, meaning to touch. Haptic interfaces give human operators the sense of touch and
forces from the computer, either in virtual or real, remote environments. Also called force reflection in
telerobotics.

Vision Camera Voice Detection Tactile Sensor


Sensing - Robotics
ROBOT SENSORS
• Force/Pressure: Force/pressure sensors provide the robot with a sense of the force being applied on
the arm and the direction of the force. These sensors are used to help the robot auto-correct for
misalignments, or to sense the distribution of loads on irregular geometry. Can also measure torques, or
moments, which are forces acting through a distance. Can be used in conjunction with haptic interfaces
to allow the human operator to feel what the robot is exerting on the environment during teleoperation
tasks.
• Limit Switches: Limit switches may be installed at end-of-motion areas in the workspace to
automatically stop the robot or reverse its direction when a move out-of-bounds is attempted; again,
used to avoid collisions.
• Other Sensors:
• Encoder - Measures Angle
• Potentiometer - Measures Angle Or Length
• LVDT - Measures Length (Linear Variable Displacement Transducer)
• Strain Gauge - Measures Deflection
• Ultrasonic Sensor - Measures Distance
• Infrared Sensor - Measures Distance
• Light Sensor - Detects Presence
Sensing - Robotics
ROBOT SENSORS

Encoder
Potentiometer IR
Sensors

LVDT

Strain Gauge Light Sensor

Ultrasonic Sensor
Actuation
Actuation - Robotics
CLASSIFICATION OF ACTUATORS
Hydraulic Actuators :
• A hydraulic actuator consists of a cylinder or fluid motor that uses hydraulic power to facilitate mechanical
operation.
• The mechanical motion gives an output in terms of linear, rotary, or oscillatory motion.

Pneumatic Actuators :
• A pneumatic actuator converts energy formed by vacuum or compressed air at high pressure into either
linear or rotary motion.

Electric Actuators :
• An electric actuator is powered by a motor that converts electrical energy into mechanical torque.
• The electrical energy is used to actuate equipment such as multi-turn valves.
• It is one of the cleanest and most readily available forms of actuator because it does not directly involve oil
or other fossil fuels.
Actuation - Robotics
HYDRAULIC ACTUATORS
• Hydraulic actuators are rugged and suited for high-force applications.
• A hydraulic actuator can hold force and torque constant without the pump supplying more fluid or
pressure due to the incompressibility of fluids.
• High Power.

Hydraulic Cylinder
Robot using Hydraulic actuation
Actuation - Robotics
PNEUMATIC ACTUATORS
• Pneumatic actuators generate precise linear motion with good accuracy.
• Typical applications of pneumatic actuators involve areas of extreme temperatures.
• Simplicity.

Robotic Palm using Pneumatic actuation

Pneumatic Actuator Cylinder


Actuation - Robotics
ELECTRIC ACTUATORS
• Simple, safe and clean movement with accurate and smooth motion control.
• Low operating cost.
• Quiet, clean, non-toxic and energy efficient.

Solenoid Actuator Robotic Arm using Servo motors.


Control
Control - Robotics
WHAT IS ROBOT CONTROL?
Robot control studies how to make a robot manipulator perform a task.

Control design may be divided roughly into the following steps:


• Physical System
• Modelling
• Control Specifications

• Position, velocity, and acceleration of all joints are the input.
• The system is the robot arm.
• Force / torque is computed and applied to the joints.

Robotic Arm Control in Joint-Space


Control - Robotics
PHYSICAL SYSTEM

• For this robot, the outputs are the positions and joint velocities of the end effector.
• The input variables are basically the torques and forces τ.
Control - Robotics
MODELING
The system's mathematical model is typically obtained via one of the two following techniques:

 Analytic : physical laws of the system's motion.


 Experimental : experimental data collected from the system itself.

• The dynamic model of robot manipulators is derived in the analytic form using basically the laws of
mechanics.
• The model is an n-DOF system (a multivariable nonlinear system).

Some interesting topics related with modelling are:

 Robustness : the ability of a control system to cope with errors due to neglected dynamics.
 Parametric identification : The objective is to obtain the numeric values of different physical
parameters.
Control - Robotics
CONTROL SPECIFICATIONS
Stability :
The property of a system by which it keeps working at a certain regime, or close to it, indefinitely.
 Lyapunov stability theory.
 Input-Output stability theory.

Motion tracking :
Tracking control in joint coordinates.
 Point to Point motion.
 Trajectory (Continuous) motion.
Intelligence
Intelligence - Robotics
THREE WAVES OF ARTIFICIAL INTELLIGENCE - DARPA
Intelligence - Robotics
FIRST WAVE OF AI
Intelligence - Robotics
FEATURES OF FIRST WAVE IN AI
Intelligence - Robotics
SELF DRIVING CARS CHALLENGE
Intelligence - Robotics
SECOND WAVE OF AI
Intelligence - Robotics
SECOND WAVE OF AI
Intelligence - Robotics
SECOND WAVE - NATURAL DATA - MANIFOLDS
Intelligence - Robotics
MANIFOLDS FEATURE
Intelligence - Robotics
MANIFOLDS- CONCEPT
Intelligence - Robotics
NEURAL NETS
Intelligence - Robotics
ERRORS – NEURAL NETWORKS
Intelligence - Robotics
APPLICATIONS
Intelligence - Robotics
CHALLENGES - SECOND WAVE OF AI
Intelligence - Robotics
THIRD WAVE OF AI
Intelligence - Robotics
THIRD WAVE OF AI
Intelligence - Robotics
MODEL GENERATION
Intelligence - Robotics
THIRD WAVE OF AI
Intelligence - Robotics
ARTIFICIAL INTELLIGENCE
Artificial Intelligence (AI) is a way of making a computer, a computer-controlled robot, or software think
intelligently, in a manner similar to the way intelligent humans think.
Areas of AI :
 Natural language processing : to communicate with humans.
 Knowledge representation : to store and retrieve information.
 Automated reasoning : to use the stored information to take decisions and to draw new conclusions.
 Machine learning : to adapt to new circumstances, to detect and extrapolate patterns.
 Vision : To identify objects in the environment.
Intelligence - Robotics
NATURAL LANGUAGE PROCESSING
Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics
concerned with the interactions between computers and human (natural) languages.

Application Areas of NLP :


● Machine translation : Automatically translate text from one human language to another.
● Optical Character Recognition (OCR) : Given an image representing printed text, determine the corresponding text.
● Question Answering : Given a human-language question, determine its answer.
● Automatic Summarization : Produce a readable summary of a chunk of text.
● Speech Recognition : Given a sound clip of a person or people speaking, determine the textual representation of the
speech.
Intelligence - Robotics

KNOWLEDGE REPRESENTATION AND AUTOMATED REASONING


 Knowledge is a theoretical or practical understanding of a subject or a domain.
 Reasoning is the ability to make inferences, and automated reasoning is concerned with the building of computing
systems that automate this process.
 Anyone can be considered a domain expert if he or she has deep knowledge (of both facts and rules) and strong
practical experience in a particular domain.
Rules as a knowledge representation technique
 Any rule consists of two parts: the IF part, called the antecedent (premise or condition) and the THEN part called the
consequent (conclusion or action).
 The basic syntax of a rule is:
IF <antecedent>
THEN <consequent>
 Expert systems are rule-based reasoning systems.
Intelligence - Robotics
MACHINE LEARNING

Machine learning is the subfield of computer science that gives computers the ability to learn without being explicitly
programmed.

Approaches :
 Artificial Neural Networks : Computations are structured in terms of an interconnected group of artificial neurons,
processing information using a connectionist approach to computation.
 Deep learning : Consists of multiple hidden layers in an artificial neural network. This approach tries to model the
way the human brain processes light and sound into vision and hearing.
 Reinforcement learning : It is concerned with how an agent ought to take actions in an environment so as to
maximize some notion of long-term reward.
 Genetic algorithms : A search heuristic that mimics the process of natural selection, and uses methods such as
mutation and crossover to generate new genotype in the hope of finding good solutions to a given problem.
 Support vector machines : Support vector machines (SVMs) are a set of related supervised learning methods used
for classification and regression.
Intelligence - Robotics
ARTIFICIAL NEURAL NETWORK (ANN)
 A computing system made up of a number of simple, highly interconnected processing elements, which
process information by their dynamic state response to external inputs.
 Highly used in classification, object detection problems.
 It can have any number of hidden layers, depending on the application.
 The input is a set of parameters and the output is a set of classification scores, while the outputs of the hidden layers are not directly interpretable.

Biological vs Artificial Neuron Artificial Neural Network


Intelligence - Robotics
VISION
Robot Vision involves using a combination of camera hardware and computer algorithms to allow robots to
process visual data from the world.
 Visual Servoing involves controlling the motion of a robot by using the feedback of the robot's position as
detected by a vision sensor.
 Convolutional Neural Network is a feed-forward artificial neural network in which the connectivity pattern
between its neurons is inspired by the organization of the animal visual cortex. Convolutional neural networks
are often used in image recognition systems. They have achieved an error rate of 0.23 percent.

Convolutional Neural Network based Object Detection


Vision System
Vision - Robotics
COMPUTER VISION SYSTEM
 Vision is our most powerful sense in aiding our perception of the 3D world around us.
 The retina is ~10 cm² and contains millions of photoreceptors
(120 million rods and 7 million cones for colour sampling)
 Provides enormous amount of information: data-rate of ~3 GBytes/s
 A large proportion of our brain power is dedicated to processing the signals from our eyes
 Our visual system is very sophisticated
 Humans can interpret images successfully under a wide range of conditions – even in the presence of
very limited cues
Vision - Robotics
COMPUTER VISION
• Automatic extraction of "meaningful" information from images and videos – varies depending on the application
Vision - Robotics
WHY IS IT HARD?

• Half of primate cerebral cortex is devoted to visual processing


• Achieving human-level visual perception is probably "AI-complete"
Vision - Robotics
COMPUTER VISION : CHALLENGES

• Viewpoint changes

• Illumination changes

• Object intra-class variations

• Inherent ambiguities:
many different 3D scenes can give rise to a particular 2D picture
Vision - Robotics
COMPUTER VISION | APPLICATIONS

 3D reconstruction and modeling

 Recognition

 Motion capture

 Augmented reality

 Video games and tele-operation

 Robot navigation and automotive

 Medical imaging
Vision - Robotics
COMPUTER VISION FOR ROBOTICS
 Enormous descriptive power of images → a lot of data to process (human vision involves 60 billion neurons!)

 Vision provides humans with a great deal of useful cues → explore the power of vision to build intelligent robots
Cameras:
• Vision is increasingly popular as a sensing modality:
• descriptive
• compactness, compatibility
• low cost
• HW advances necessary to support the processing of
images
Vision - Robotics
THE CAMERA | IMAGE FORMATION

• If we place a piece of film in front of an object, do we get a reasonable image?


• Add a barrier to block off most of the rays
• This reduces blurring
• The opening is known as the aperture
Vision - Robotics
THE PINHOLE CAMERA MODEL

Pinhole model:
 Captures beam of rays – all rays through a single point (note: no lens!)
 The point is called Center of Projection or Optical Center
 The image is formed on the Image Plane
 We will use the pinhole camera model to describe how the image is formed
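Under this model, a 3-D point (X, Y, Z) in camera coordinates projects to the image plane at u = f·X/Z, v = f·Y/Z. A minimal sketch (the focal length value is illustrative):

```python
def project(point, f):
    """Perspective projection of a camera-frame point under the pinhole model."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)

# A point twice as far away projects half as far from the image
# center -- the familiar perspective foreshortening.
u1, v1 = project((1.0, 1.0, 2.0), f=500.0)   # (250.0, 250.0)
u2, v2 = project((1.0, 1.0, 4.0), f=500.0)   # (125.0, 125.0)
```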
Vision - Robotics
WHY USE A LENS?
The ideal pinhole:
 Only one ray of light reaches each point on the film, so the image can be very dim; the small opening also gives rise to diffraction effects
 Making the pinhole (i.e. the aperture) bigger makes the image blurry
(Figure: one ray passing through a pinhole vs. a blurred image with a large aperture)

• A lens focuses light onto the film


• Rays passing through the optical center are not deviated
• All rays parallel to the optical axis converge at the focal point
Vision - Robotics
HOW TO CREATE A FOCUSED IMAGE?

Find a relationship between f, z and e. For a thin lens, the image is in focus when 1/z + 1/e = 1/f (the thin lens equation).


Vision - Robotics
PERSPECTIVE PROJECTIONS
• When an object is viewed from different directions and at different distances, the appearance of the object
will be different. Such view is called perspective view.

• Perspective projections mimic what the human eyes see.


Vision - Robotics
PERSPECTIVE PROJECTIONS
The size of the perspective view depends on the position of the picture plane relative to the object.

• When the object is placed behind the picture plane, the perspective will show the object reduced in size.
• When the object is placed in front of the picture plane, the perspective will show the object enlarged in
size.
• When the picture plane coincides with the object, the perspective will show the true size of the object.
Vision - Robotics
DISTORTION AND RADIAL DISTORTION
In geometric optics, distortion is a deviation from rectilinear projection, a projection in which straight
lines in a scene remain straight in an image. It is a form of optical aberration.

The camera matrix does not account for lens distortion as an ideal pinhole camera does not have a lens.
To accurately represent a real camera, the camera model includes the radial and tangential lens
distortion.

Radial Distortion
Radial distortion occurs when light rays bend more near the edges of a lens than they do at its optical
center. The smaller the lens, the greater the distortion.
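The usual polynomial model of radial distortion (as used by common calibration tools) maps a normalized image point (x, y) to x(1 + k1·r² + k2·r⁴ + k3·r⁶), with r² = x² + y². A sketch with illustrative coefficient values:

```python
def radial_distort(x, y, k1, k2=0.0, k3=0.0):
    """Apply polynomial radial distortion to a normalized image point."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * scale, y * scale

# k1 < 0 gives barrel distortion: points are pulled toward the center,
# and the effect grows with distance from the optical center.
xd, yd = radial_distort(0.5, 0.0, k1=-0.2)   # (0.475, 0.0)
```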
Vision - Robotics
TANGENTIAL DISTORTION
Tangential distortion occurs when the lens and the image plane are not parallel. The tangential distortion
coefficients model this type of distortion.
Vision - Robotics
TYPES OF DISTORTION
• Barrel distortion: When straight lines are curved inwards in the shape of a barrel, this type of aberration is called "barrel
distortion". Commonly seen on wide angle lenses, barrel distortion happens because the field of view of the lens is
much wider than the size of the image sensor and hence it needs to be "squeezed" to fit. As a result, straight lines are
visibly curved inwards, especially towards the extreme edges of the frame.
• Pincushion distortion: Pincushion distortion is the exact opposite of barrel distortion – straight lines are curved
outwards from the center. This type of distortion is commonly seen on telephoto lenses, and it occurs due to image
magnification increasing towards the edges of the frame from the optical axis. This time, the field of view is smaller
than the size of the image sensor and it thus needs to be "stretched" to fit. As a result, straight lines appear to be
pulled upwards in the corners.
• Mustache distortion: A mixture of both types, sometimes referred to as mustache distortion (moustache distortion) or
complex distortion, is less common but not rare. It starts out as barrel distortion close to the image center and
gradually turns into pincushion distortion towards the image periphery, making horizontal lines in the top half of the
frame look like a handlebar mustache. Its characteristics are indeed complex and can be quite painful to deal with.
While this type of distortion can potentially be fixed, it often requires specialized software.

Barrel Distortion Pincushion Distortion Mustache Distortion


Vision - Robotics
DISTORTION
Vision - Robotics
CAMERA CALIBRATION
Camera calibration: is the process of estimating parameters
of the camera using images of a special calibration pattern.
The parameters include camera intrinsics, distortion
coefficients, and camera extrinsics.

• Use these camera parameters to remove lens distortion


effects from an image, measure planar objects,
reconstruct 3-D scenes from multiple cameras, and
perform other computer vision applications.

Geometric camera calibration, also referred to as camera


resectioning, estimates the parameters of a lens and image
sensor of an image or video camera. These parameters are
used to correct lens distortion, measure the size of an
object in world units, or determine the location of the
camera in the scene.

These tasks are used in applications such as machine vision


to detect and measure objects. They are also used in
robotics, for navigation systems, and 3-D scene
reconstruction.
Vision - Robotics
PINHOLE CAMERA MODEL
The pinhole camera parameters are represented in a 4-by-3 matrix called the camera matrix. This matrix
maps the 3-D world scene into the image plane. The calibration algorithm calculates the camera matrix
using the extrinsic and intrinsic parameters. The extrinsic parameters represent the location of the camera
in the 3-D scene. The intrinsic parameters represent the optical center and focal length of the camera.

The world points are transformed to camera coordinates using the extrinsic parameters. The camera
coordinates are mapped into the image plane using the intrinsic parameters.
Vision - Robotics
CAMERA CALIBRATION PARAMETERS

The calibration algorithm calculates the camera matrix


using the extrinsic and intrinsic parameters. The
extrinsic parameters represent a rigid transformation
from 3-D world coordinate system to the 3-D camera's
coordinate system. The intrinsic parameters represent a
projective transformation from the 3-D camera's
coordinates into the 2-D image coordinates.

Extrinsic Parameters
The extrinsic parameters consist of a rotation, R, and a
translation, t. The origin of the camera's coordinate
system is at its optical center and its x- and y-axis define
the image plane.

Intrinsic Parameters
The intrinsic parameters include the focal length, the
optical center, also known as the principal point, and
the skew coefficient.
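A sketch of how these parameters combine (all numeric values are illustrative, not a real calibration): the intrinsic matrix K holds the focal lengths, principal point, and skew, while the extrinsics R, t move world points into the camera frame before projection.

```python
import numpy as np

fx, fy = 800.0, 800.0     # focal lengths in pixels
cx, cy = 320.0, 240.0     # principal point
s = 0.0                   # skew coefficient

K = np.array([[fx, s, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                     # extrinsic rotation (world axes = camera axes)
t = np.array([0.0, 0.0, 0.0])     # extrinsic translation

def project_world(Pw):
    """Project a 3-D world point into 2-D pixel coordinates."""
    Pc = R @ Pw + t               # rigid transform: world -> camera
    uvw = K @ Pc                  # projective transform: camera -> image
    return uvw[:2] / uvw[2]       # perspective division by depth

uv = project_world(np.array([0.1, 0.0, 1.0]))   # -> [400., 240.]
```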
Vision - Robotics
COMPUTER STEREO VISION
Computer stereo vision is the extraction of 3D information from digital
images, such as obtained by a CCD camera.
By comparing information about a scene from two vantage points, 3D
information can be extracted by examination of the relative positions
of objects in the two panels. This is similar to the biological process
of stereopsis.
In traditional stereo vision, two cameras, displaced horizontally from
one another are used to obtain two differing views on a scene, in a
manner similar to human binocular vision.
By comparing these two images, the relative depth information can be
obtained in the form of a disparity map, which encodes the difference
in horizontal coordinates of corresponding image points. The values in
this disparity map are inversely proportional to the scene depth at the
corresponding pixel location.
(Figure: diagram describing the relationship of image displacement to depth with stereoscopic images, assuming flat co-planar images.)
Vision - Robotics
PRE-PROCESSING STEPS-STEREO VISION
For a human to compare the two images, they must be superimposed in a stereoscopic device, with the
image from the right camera being shown to the observer's right eye and from the left one to the left eye.

In a computer vision system, several pre-processing steps are required.

• The image must first be undistorted, such that barrel distortion and tangential distortion are removed.
This ensures that the observed image matches the projection of an ideal pinhole camera.

• The image must be projected back to a common plane to allow comparison of the image pairs, known as
image rectification.

• An information measure which compares the two images is minimized. This gives the best estimate of
the position of features in the two images, and creates a disparity map.

• Optionally, the received disparity map is projected into a 3D point cloud. By utilizing the cameras'
projective parameters, the point cloud can be computed such that it provides measurements at a known
scale.
Vision - Robotics
IMAGE IN STEREO VISION
Vision - Robotics
DISPARITY
Disparity refers to the distance between two corresponding points
in the left and right image of a stereo pair. If you look at the image
below you see a labelled point X (ignore X1, X2 & X3). By following
the dotted line from X to OL you see the intersection point with the
left-hand plane at XL. The same principle applies with the right-hand image plane.

If X projects to a point in the left frame XL = (u,v) and to the right


frame at XR = (p,q) you can find the disparity for this point as the
magnitude of the vector between (u,v) and (p,q). Obviously this
process involves choosing a point in the left hand frame and then
finding its match (often called the corresponding point) in the right
hand image; often this is a particularly difficult task to do without
making a lot of mistakes.

Disparity Map/Image
If you were to perform this matching process for every pixel in the
left hand image, finding its match in the right hand frame and
computing the distance between them you would end up with an
image where every pixel contained the distance/disparity value
for that pixel in the left image.
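For a rectified stereo pair, the inverse relationship between disparity and depth is Z = f·B/d, with focal length f (pixels) and baseline B (meters). A sketch with assumed values for f and B:

```python
def depth_from_disparity(d_px, f_px=700.0, baseline_m=0.12):
    """Depth (m) of a pixel from its disparity (px) in a rectified pair."""
    if d_px <= 0:
        return float("inf")     # zero disparity -> point at infinity
    return f_px * baseline_m / d_px

# Disparity is inversely proportional to depth:
# halving the disparity doubles the estimated depth.
z_near = depth_from_disparity(42.0)   # 2.0 m
z_far = depth_from_disparity(21.0)    # 4.0 m
```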
Vision - Robotics
STRUCTURE FROM MOTION
Structure from motion (SfM) is a photogrammetric range imaging technique for estimating three-
dimensional structures from two-dimensional image sequences that may be coupled with local motion
signals. It is studied in the fields of computer vision and visual perception.

In biological vision, SfM refers to the phenomenon by which humans (and other living creatures) can
recover 3D structure from the projected 2D (retinal) motion field of a moving object or scene.
Motion Planning
Motion Planning - Robotics
MOTION PLANNING

Goals:
• Collision-free trajectories.
• Robot should reach the goal location as fast as possible.

Dynamic Environments:
How to react to unforeseen obstacles?
• Efficiency
• Reliability

Motion planning (also known as the navigation problem or the piano


mover's problem) is a term used in robotics for the process of breaking
down a desired movement task into discrete motions that satisfy movement
constraints and possibly optimize some aspect of the movement.
Motion Planning - Robotics
EXAMPLE & APPLICATION
Example, consider navigating a mobile robot inside a building
to a distant waypoint. It should execute this task while avoiding
walls and not falling down stairs. A motion planning algorithm
would take a description of these tasks as input, and produce
the speed and turning commands sent to the robot's wheels.

Motion planning algorithms might address robots with a larger


number of joints (e.g., industrial manipulators), more complex
tasks (e.g. manipulation of objects), different constraints (e.g., a
car that can only drive forward), and uncertainty (e.g. imperfect
models of the environment or robot).

Motion planning has several robotics applications, such as


autonomy, automation, and robot design in CAD software, as
well as applications in other fields, such as animating digital
characters, video game artificial intelligence, architectural
design, robotic surgery, and the study of biological molecules.
Motion Planning - Robotics
CONFIGURATION SPACE
A basic motion planning problem is to produce a continuous motion that connects a start configuration S and a
goal configuration G, while avoiding collision with known obstacles.
The robot and obstacle geometry is described in a 2D or 3D workspace, while the motion is represented as a
path in (possibly higher-dimensional) configuration space.

Configuration Space: A configuration describes the pose of the robot, and the configuration space C is the set
of all possible configurations.
For example: If the robot is a single point (zero-sized) translating in a 2-dimensional plane (the workspace), C
is a plane, and a configuration can be represented using two parameters (x, y).
If the robot is a 2D shape that can translate and rotate, the workspace is still 2-dimensional. However, C is the
special Euclidean group SE(2) = R² × SO(2) (where SO(2) is the special orthogonal group of 2D rotations), and
a configuration can be represented using 3 parameters (x, y, θ).
If the robot is a solid 3D shape that can translate and rotate, the workspace is 3-dimensional, but C is the
special Euclidean group SE(3) = R³ × SO(3), and a configuration requires 6 parameters: (x, y, z) for translation,
and Euler angles (α, β, γ).
If the robot is a fixed-base manipulator with N revolute joints (and no closed-loops), C is N-dimensional.

Configuration space of a point-sized robot. White = Cfree, gray = Cobs.


Workspace
Motion Planning - Robotics
FREE SPACE AND TARGET SPACE
Free space:
• The set of configurations that avoids collision with obstacles is called the
free space Cfree.

• The complement of Cfree in C is called the obstacle or forbidden region.

Target space:
• Target space is a linear subspace of free space which denotes where we
want the robot to move to.

• In global motion planning, target space is observable by the robot's


sensors. However, in local motion planning, the robot cannot observe the
target space in some states.

• To solve this problem, the robot goes through several virtual target
spaces, each of which is located within the observable area (around the
robot). A virtual target space is called a sub-goal.
Motion Planning - Robotics
GRID-BASED ALGORITHM
Grid-based search:

Grid-based approaches overlay a grid on configuration


space, and assume each configuration is identified with a
grid point.

At each grid point, the robot is allowed to move to adjacent


grid points as long as the line between them is completely
contained within Cfree (this is tested with collision
detection).

This discretizes the set of actions, and search algorithms (like


A*) are used to find a path from the start to the goal.

These approaches require setting a grid resolution. Search is


faster with coarser grids, but the algorithm will fail to find
paths through narrow portions of Cfree. Furthermore, the
number of points on the grid grows exponentially in the
configuration space dimension, which makes them
inappropriate for high-dimensional problems.
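A minimal sketch of grid-based planning with A* on a hand-made occupancy grid (the grid, 4-connectivity, and Manhattan heuristic are illustrative choices):

```python
import heapq

def astar(grid, start, goal):
    """A* over a grid of 0 (free) / 1 (obstacle) cells, 4-connected."""
    rows, cols = len(grid), len(grid[0])
    def h(p):                      # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None                    # no path through C_free

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # detours around the obstacle row
```

On this grid the planner must go all the way around the wall of obstacles, illustrating how the grid discretizes C_free; a finer grid would allow smoother paths at higher cost.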
Motion Planning - Robotics
REWARD-BASED ALGORITHMS
Reward-based algorithms:

Reward-based algorithms assume that the robot in each state (position and internal state, including
direction) can choose between different actions (motion).

However, the result of each action is not definite. In other words, outcomes (displacement) are partly
random and partly under the control of the robot. The robot gets positive reward when it reaches the
target and gets negative reward if it collides with an obstacle.

These algorithms try to find a path which maximizes cumulative future rewards. The Markov decision
process (MDP) is a popular mathematical framework that is used in many reward-based algorithms.

The advantage of MDPs over other reward-based algorithms is that they generate the optimal path.

The disadvantage of MDPs is that they limit the robot to choose from a finite set of actions. Therefore, the
path is not smooth (similar to grid-based approaches). Fuzzy Markov decision processes (FMDPs) are an
extension of MDPs which generate smooth paths using a fuzzy inference system.
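A sketch of the MDP idea with value iteration on a toy 1-D corridor (the states, rewards, and discount factor are illustrative; the dynamics here are deterministic for brevity, whereas a full MDP has partly random outcomes):

```python
# States 0..4 on a corridor; entering state 4 (target) yields +1,
# entering state 2 (near an obstacle) yields -0.1.
N, GOAL, OBSTACLE, GAMMA = 5, 4, 2, 0.9

def reward(s):
    return 1.0 if s == GOAL else (-0.1 if s == OBSTACLE else 0.0)

V = [0.0] * N
for _ in range(200):                     # iterate the Bellman update
    V_new = V[:]
    for s in range(N):
        if s == GOAL:
            V_new[s] = 0.0               # terminal state
            continue
        moves = [max(s - 1, 0), min(s + 1, N - 1)]   # actions: left, right
        V_new[s] = max(reward(s2) + GAMMA * V[s2] for s2 in moves)
    V = V_new
```

The converged value function rises monotonically toward the goal, so acting greedily with respect to V maximizes the cumulative future reward, as described above.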
Motion Planning - Robotics

ARTIFICIAL POTENTIAL FIELDS & SAMPLING-BASED ALGORITHMS


Artificial potential fields:

One approach is to treat the robot's configuration as a point (like a charged particle) in a potential field that
combines attraction to the goal and repulsion from obstacles.

The resulting trajectory is output as the path. This approach has advantages in that the trajectory is
produced with little computation. However, they can become trapped in local minima of the potential field,
and fail to find a path.
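One gradient step of this scheme can be sketched as follows (the gains, cutoff radius, and scene layout are illustrative; the repulsive term uses a common inverse-distance form with a cutoff):

```python
import math

K_ATT, K_REP, RHO0 = 1.0, 0.05, 1.0   # gains and repulsion cutoff radius

def force(q, goal, obstacle):
    """Negative gradient of the combined attractive/repulsive potential."""
    # Attractive component: -grad of 0.5 * K_ATT * |q - goal|^2
    fx = K_ATT * (goal[0] - q[0])
    fy = K_ATT * (goal[1] - q[1])
    # Repulsive component, active only within RHO0 of the obstacle
    dx, dy = q[0] - obstacle[0], q[1] - obstacle[1]
    rho = math.hypot(dx, dy)
    if 0 < rho < RHO0:
        mag = K_REP * (1.0 / rho - 1.0 / RHO0) / rho ** 2
        fx += mag * dx / rho
        fy += mag * dy / rho
    return fx, fy

def step(q, goal, obstacle, eta=0.1):
    """One gradient-descent step of size eta along the force."""
    fx, fy = force(q, goal, obstacle)
    return (q[0] + eta * fx, q[1] + eta * fy)

q = step((0.0, 0.0), goal=(5.0, 0.0), obstacle=(2.0, 1.5))
```

Iterating this step produces a trajectory with very little computation, but where the attractive and repulsive terms cancel the robot can stall in a local minimum, exactly the failure mode noted above.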

Sampling-based algorithms:

Sampling-based algorithms represent the configuration space with a roadmap of sampled configurations.

A basic algorithm samples N configurations in C, and retains those in Cfree to use as milestones. A
roadmap is then constructed that connects two milestones P and Q if the line segment PQ is completely in
Cfree. Again, collision detection is used to test inclusion in Cfree. To find a path that connects S and G, they
are added to the roadmap. If a path in the roadmap links S and G, the planner succeeds, and returns that
path. If not, the reason is not definitive: either there is no path in Cfree, or the planner did not sample
enough milestones.
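The basic algorithm above can be sketched as a minimal probabilistic roadmap (PRM) in a unit square with one circular obstacle; the sample count, connection radius, and obstacle are illustrative assumptions, and the segment test is an approximate collision check:

```python
import math, random

random.seed(0)
OBST, R_OBST = (0.5, 0.5), 0.2          # circular forbidden region

def in_free_space(p):
    return math.dist(p, OBST) > R_OBST

def segment_free(a, b, steps=20):
    """Approximate collision check by sampling along the segment."""
    return all(in_free_space((a[0] + (b[0] - a[0]) * i / steps,
                              a[1] + (b[1] - a[1]) * i / steps))
               for i in range(steps + 1))

# Sample N configurations and keep those in C_free as milestones.
start, goal = (0.05, 0.05), (0.95, 0.95)
milestones = [start, goal] + [
    p for p in ((random.random(), random.random()) for _ in range(150))
    if in_free_space(p)]

# Connect nearby milestones whose joining segment stays in C_free.
edges = {m: [] for m in milestones}
for i, a in enumerate(milestones):
    for b in milestones[i + 1:]:
        if math.dist(a, b) < 0.3 and segment_free(a, b):
            edges[a].append(b)
            edges[b].append(a)

# Breadth-first search over the roadmap from start to goal.
frontier, parent = [start], {start: None}
while frontier:
    node = frontier.pop(0)
    for nxt in edges[node]:
        if nxt not in parent:
            parent[nxt] = node
            frontier.append(nxt)
found = goal in parent
```

If the search fails, the outcome is inconclusive in exactly the sense described above: either no path exists in C_free, or too few milestones were sampled.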
