
Contents

1. Who am I?
2. Laws of Robotics
3. Sixth sense & AI
4. Rise of machines
   • Four generations of robots
5. Robot systems
   • Controller
   • Sensor
   • Transformation matrix
6. Advantages
7. Impacts of Robotics on society
8. Termination
Abstract
ROBOT: a mechatronic device consisting of a brain (a computer), sensors, and mechanical parts. There are four laws to be followed in implementing robots.

Robots can predict like humans by applying ARTIFICIAL INTELLIGENCE concepts to them. They can then think like humans, that is, they acquire a SIXTH SENSE.

RISE OF THE MACHINES: the robot has evolved step by step through four generations.

A robot is a combination of many systems such as the controller, mobility, sensors, etc. The robot's hands are moved using MATRIX transformation techniques.

Robots offer advantages in many fields such as medicine, space, and agriculture. At the same time, dangerous things may happen when robots take on dangerous jobs.

The robot is THE TERMINATOR, which can complete the given job without failure.
"Robotics is a tool for learning through experience and discovery."
- Jeneva Westendorf

Introduction to Robots
"We don't remember a world without robots. For us, a robot is a
robot. Gears and metal; electricity and positrons. Mind and iron! Human-
made! If necessary, human-destroyed! But we haven't worked with them, so
we don't know them. They're a cleaner better breed than we are."

Who am I?
 The word "robot" was coined by Karel Capek, who wrote a play entitled "R.U.R.", or "Rossum's Universal Robots", back in 1921. The base for this word comes from the Czech word 'robotnik', which means 'worker'.
 But even before that, the Greeks made movable statues that were the beginnings of what we would call robots.
 For the most part, the word "robot" today means any man-made machine that can perform work or other actions normally performed by humans.

A robot is a machine that imitates the actions or appearance of an intelligent creature, usually a human. It gathers information about its environment (senses) and uses that information (thinks) to follow instructions to do work (acts).

To qualify as a robot, a machine has to be able to do two things:

1) Get information from its surroundings

2) Do something physical–such as move or manipulate objects.

This is the working definition of robots that the developers of the Robotics exhibit used. Today, technology is changing at incredible rates, making the identification of a robot somewhat difficult. Things that we use every day incorporate features beyond those of early robots.

Robots are ideal for jobs that require repetitive, precise movements.
Human workers need a safe working environment, salaries, breaks,
food and sleep. Robots don't. Human workers get bored doing the
same thing over and over, which can lead to fatigue and costly
mistakes. Robots don't get bored.

What are Robots Made Of?

Robots have three main components:

• Brain - usually a computer
• Actuators and mechanical parts - motors, pistons, grippers, wheels, gears
• Sensors - vision, sound, temperature, motion, light, touch, etc.

With these three components, robots can interact with and affect their environment to become useful.
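
To make the interplay of these three components concrete, here is a minimal sense-think-act loop in Python. It is only an illustrative sketch: the light sensor, threshold, and motor commands are hypothetical and not taken from any robot described in this text.

```python
# Minimal sense-think-act loop for a hypothetical line-following robot.
# The sensor values, threshold, and motor commands are illustrative only.

def sense(light_sensor):
    """Brain reads raw data from a sensor (here, a light level 0-1023)."""
    return light_sensor.read()

def think(light_level, threshold=512):
    """Brain turns raw data into a decision: steer toward the darker line."""
    return "turn_left" if light_level > threshold else "turn_right"

def act(motors, decision):
    """Actuators (motors) carry out the decision in the physical world."""
    if decision == "turn_left":
        motors.set_speed(left=0.2, right=0.6)
    else:
        motors.set_speed(left=0.6, right=0.2)

def control_loop(light_sensor, motors):
    while True:                       # robots repeat this cycle continuously
        reading = sense(light_sensor)
        decision = think(reading)
        act(motors, decision)
```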

What Do Robots Do?

Most robots today are used in factories to build products such as cars and electronics. Others are used to explore underwater environments and even other planets.

Laws of Robotics
# A robot must not injure a human being or, through inaction, allow a human being to come to harm.
# A robot must always obey orders given to it by a human being, except where that would conflict with the first law.
# A robot must protect its own existence, except where that would conflict with the first or second law.
Later, this "Zeroth Law" was added:
# A robot must not injure humanity or, through inaction, allow humanity to come to harm.
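
Read together, the laws form an ordered priority list in which a lower-numbered law always overrides the ones below it. The Python sketch below is purely illustrative of that ordering; the predicate functions and action dictionaries are invented for the example and do not come from any real robot software.

```python
# Illustrative only: the Laws of Robotics as an ordered veto list.
# Each rule returns True if the proposed action would violate it.

def violates_zeroth(action):   # harms humanity as a whole
    return action.get("harms_humanity", False)

def violates_first(action):    # harms a human being
    return action.get("harms_human", False)

def violates_second(action):   # disobeys a human order
    return not action.get("obeys_order", True)

def violates_third(action):    # endangers the robot's own existence
    return action.get("endangers_self", False)

# Lower index = higher priority; a violation anywhere in the list blocks the action.
LAWS = [violates_zeroth, violates_first, violates_second, violates_third]

def permitted(action):
    return not any(law(action) for law in LAWS)

print(permitted({"obeys_order": True}))                        # True
print(permitted({"harms_human": True, "obeys_order": True}))   # False
```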

In science fiction, once robots are used to fight wars, they turn on their human owners and take over the world.

SIXTH SENSE:
ROBOT SENSING:
The use of external sensing mechanisms allows a robot to interact with its
environment in a flexible manner. This is in contrast to preprogrammed operations in
which a robot is “taught” to perform repetitive tasks via a set of preprogrammed
functions.
The use of sensing technology to endow machines with a greater degree of
intelligence in dealing with their environment is indeed an active topic of research and
development in the robotics field.
The function of robot sensors may be divided into two principal categories:
1. Internal state.
2. External state.
Internal state sensors deal with the detection of variables such as
arm joint position, which are used for robot control.
External state sensors, on the other hand, deal with the detection of
variables such as range, proximity and touch.
External sensing is used for robot guidance as well as for object identification and handling. Although proximity and touch sensing are important, vision is recognized as the most powerful of robot sensory capabilities. Robot vision may be defined as the process of extracting, characterizing, and interpreting information from images of a three-dimensional world. The process, also commonly referred to as machine or computer vision, may be subdivided into six principal areas:
1. Sensing.
2. Preprocessing.
3. Segmentation.
4. Description.
5. Recognition.
6. Interpretation.
It is convenient to group these various areas of vision according to the
sophistication involved in their implementation. We consider three levels of processing:
low, medium and high level vision.
Here, we shall treat sensing and preprocessing as low-level vision
functions. This will take us from the image formation process itself to compensations
such as noise reduction, and finally to the extraction of primitive image features such as
intensity discontinuities.

By 2050, robot "brains" based on computers that execute 100 trillion instructions per second will start rivaling human intelligence.

In terms of our six subdivisions, we will treat segmentation, description, and recognition of individual objects as medium-level vision functions. High-level vision refers to processes that attempt to emulate cognition.
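
The sketch below arranges the six areas as a simple pipeline, grouped into the three processing levels just described. It is an illustrative skeleton only: the camera object, the mean filter, the threshold, and the toy descriptors are assumptions, not part of any actual vision system.

```python
# Hypothetical skeleton of the six machine-vision stages described above,
# grouped into low-, medium-, and high-level processing.
import numpy as np

# --- low-level vision: sensing and preprocessing ---
def sensing(camera):
    return camera.capture()                  # acquire a raw grayscale image array

def preprocessing(image):
    # crude noise reduction with a 3x3 mean filter (illustrative, not optimized)
    kernel = np.ones((3, 3)) / 9.0
    h, w = image.shape
    out = image.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(image[i-1:i+2, j-1:j+2] * kernel)
    return out

# --- medium-level vision: segmentation, description, recognition ---
def segmentation(image, threshold=128):
    return image > threshold                  # foreground/background mask

def description(mask):
    return {"area": int(mask.sum())}          # a primitive region descriptor

def recognition(features):
    return "large object" if features["area"] > 500 else "small object"

# --- high-level vision: interpretation (attempting to emulate cognition) ---
def interpretation(label):
    return f"The scene appears to contain a {label}."
```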
Artificial Intelligence:

 Sensing
Learn about human and robotic vision systems, explore three modes
of robotic sensing, and apply a robotic sensory mode to a specific task.

 Thinking
Human thinking (heuristic) and robotic thinking (algorithmic) will be
explored as you gain an understanding of why a robot needs specific
instructions.

Rise of the Robots

Nervous Tissue and Computation

If we accept that computers will eventually become powerful enough to simulate the mind, the question that naturally arises is: what processing rate will be necessary to yield performance on a par with the human brain?

By comparing how fast the neural circuits in the retina perform image-processing operations with how many instructions per second it takes a computer to accomplish similar work, it is possible to at least coarsely estimate the information-processing power of nervous tissue and, by extrapolation, that of the entire human nervous system.

Motion detection in the retina, if performed by efficient software, requires the execution of at least 100 computer instructions. Thus, to accomplish the retina's 10 million detections per second would require at least 1,000 MIPS.

The entire human brain is about 75,000 times heavier than the 0.02 gram of processing circuitry in the retina, which implies that it would take, in round numbers, 100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain. Personal computers in 1999 beat certain insects but lose to the human retina and even to the 0.1-gram brain of a goldfish. A typical PC would have to be at least a million times more powerful to perform like a human brain.
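
The arithmetic behind this estimate can be reproduced directly from the figures quoted above; the short script below only restates that calculation.

```python
# Reproducing the back-of-the-envelope estimate quoted above.
detections_per_second = 10e6        # retina: 10 million detections per second
instructions_per_detection = 100    # ~100 computer instructions per detection
retina_mips = detections_per_second * instructions_per_detection / 1e6
print(retina_mips)                  # 1,000 MIPS for the retina

brain_mass_g = 1500                 # whole human brain
retina_circuitry_g = 0.02           # processing circuitry in the retina
scale_factor = brain_mass_g / retina_circuitry_g
print(scale_factor)                 # 75,000 times heavier

brain_mips = retina_mips * scale_factor
print(brain_mips)                   # 75 million MIPS, i.e. in round numbers
                                    # ~100 million MIPS = 100 trillion instructions/second
```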

Certain dangerous jobs are best done by robots. Guided remotely using
video cameras, the Mini-Andros can investigate–and defuse–bombs.

A Sense of Space

Robots that chart their own routes emerged from laboratories worldwide in the mid 1990s, as microprocessors reached 100 MIPS. Most build two-dimensional maps from sonar or laser rangefinder scans to locate and route themselves, and the best seem able to navigate office hallways for days before becoming disoriented.

Too often, different locations in the coarse maps resemble one another. Conversely, the same location, scanned at different heights, looks different, or small obstacles or awkward protrusions are overlooked. But sensors, computers and techniques are improving, and success is in sight.

Fast Replay

Income and experience from spatially aware industrial robots would set the stage for smarter yet cheaper ($1,000 rather than $10,000) consumer products, starting probably with small robot vacuum cleaners that automatically learn their way around a home, explore unoccupied rooms and clean whenever needed.

Commercial success will provoke competition and accelerate investment in manufacturing, engineering and research.

Vacuuming robots ought to beget smarter cleaning robots with dusting, scrubbing and picking-up arms, followed by larger multifunction utility robots with stronger, more dexterous arms and better sensors.

Programs will be written to make such machines pick up clutter, store, retrieve and deliver things, take inventory, guard homes, open doors, mow lawns, play games and so on.

New applications will expand the market and spur further advances when robots fall short in acuity, precision, strength, reach, dexterity, skill or processing power. Capability, numbers sold, engineering and manufacturing quality, and cost-effectiveness will increase in a mutually reinforcing spiral.

Perhaps by 2010 the process will have produced the first broadly competent "universal robots," as big as people but with lizardlike 5,000-MIPS minds that can be programmed for almost any simple chore.

Robots also can go into dangerously polluted environments, like chemical spills or
radioactive "hot zones" in nuclear power plants.

First-generation universal robots will handle only contingencies explicitly covered in their application programs. Unable to adapt to changing circumstances, they will often perform inefficiently or not at all. Still, so much physical work awaits them in businesses, streets, fields and homes that robotics could begin to overtake pure information technology commercially.

A second generation of universal robots, with mouselike 100,000-MIPS minds, will adapt as the first generation cannot and will even be trainable. Besides application programs, such robots would host a suite of software "conditioning modules" that would generate positive and negative reinforcement signals in predefined circumstances.

For example, doing jobs fast and keeping its batteries charged will be positive; hitting or breaking something will be negative. There will be other ways to accomplish each stage of an application program, from the minutely specific (grasp the handle underhand or overhand) to the broadly general (work indoors or outdoors). As jobs are repeated, alternatives that result in positive reinforcement will be favored, those with negative outcomes shunned. Slowly but surely, second-generation robots will work increasingly well.
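
A very rough sketch of how such a conditioning module might favor positively reinforced alternatives is given below. The alternatives, reward values, and update rule are invented for illustration and are not drawn from any actual second-generation robot.

```python
import random

# Illustrative conditioning module: alternatives that earn positive
# reinforcement are chosen more often on later repetitions of a job.
alternatives = {"grasp_underhand": 0.0, "grasp_overhand": 0.0}   # preference scores

def choose():
    # Prefer the higher-scoring alternative, but keep a little exploration.
    if random.random() < 0.1:
        return random.choice(list(alternatives))
    return max(alternatives, key=alternatives.get)

def reinforce(action, reward):
    # reward > 0: job done fast / batteries kept charged
    # reward < 0: something was hit or broken
    alternatives[action] += 0.1 * (reward - alternatives[action])

for _ in range(100):                           # repeat the job many times
    action = choose()
    reward = 1.0 if action == "grasp_overhand" else -0.5   # hypothetical outcomes
    reinforce(action, reward)

print(alternatives)   # the positively reinforced grasp dominates over time
```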

A third generation of robots will learn very quickly from mental rehearsals in simulations that model physical, cultural and psychological factors. Physical properties include the shape, weight, strength, texture and appearance of things, and how to handle them. Cultural aspects include a thing's name, value, proper location and purpose.

Psychological factors, applied to humans and robots alike, include goals, beliefs, feelings and preferences. Developing the simulators will be a huge undertaking involving thousands of programmers and experience-gathering robots.

The simulation would track external events and tune its models to keep them faithful to reality. It would let a robot learn a skill by imitation and afford a kind of consciousness. Asked why there are candles on the table, a third-generation robot might consult its simulation of house, owner and self to reply that it put them there because its owner likes candlelit dinners and it likes to please its owner. Further queries would elicit more details about a simple inner mental life concerned only with concrete situations and people in its work area.

Fourth-generation universal robots with humanlike 100-million-MIPS minds will be able to abstract and generalize. They will result from melding powerful reasoning programs with third-generation machines. These reasoning programs will be the far more sophisticated descendants of today's theorem provers and expert systems, which mimic human reasoning to make medical diagnoses, schedule routes, make financial decisions, configure computer systems, analyze seismic data to locate oil deposits, and so on.

Properly educated, the resulting robots will become quite formidable. In fact, I am sure they will outperform us in any conceivable area of endeavor, intellectual or physical. Inevitably, such a development will lead to a fundamental restructuring of our society. Entire corporations will exist without any human employees or investors at all. Humans will play a pivotal role in formulating the intricate complex of laws that will govern corporate behavior.

Ultimately, though, it is likely that our descendants will cease to work in the sense that we do now. They will probably occupy their days with a variety of social, recreational and artistic pursuits, not unlike today's comfortable retirees or the wealthy leisure classes.

Robot Systems
Robots are composed of several systems working together as a whole. The type of job the robot does dictates what system elements it needs. The general categories of robot systems are:

• Controller
• Body
• Mobility
• Power
• Sensors
• Tools

Controller
The controller is the robot's brain and controls the robot's
movements. It's usually a computer of some type which is used to store information
about the robot and the work environment and to store and execute programs
which operate the robot.
The control system contains programs, data, algorithms, logic analysis and various other processing activities which enable the robot to perform.

One example is the AARM motion control system. AARM stands for Advanced Architecture Robot and Machine Motion, and it is a commercial product from American Robot for industrial machine motion control.
Industrial controllers are either non-servos, point-to-point servos or continuous
path servos.
A non-servo robot usually moves parts from one area to another and
is called a "pick and place" robot. The non-servo robot motion is started by the
controller and stopped by a mechanical stop switch. The stop switch sends a signal
back to the controller which starts the next motion.
A point-to-point servo moves to exact points so only the stops in the
path are programmed.
A continuous path servo is appropriate when a robot must proceed
on a specified path in a smooth, constant motion.
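
To illustrate the difference between the last two modes, the sketch below contrasts a point-to-point command, which specifies only the stop positions, with a continuous-path command, which interpolates intermediate positions along the whole trajectory. The controller interface and the linear interpolation are assumptions made for this example.

```python
# Hypothetical motion commands illustrating point-to-point vs continuous-path control.

def point_to_point(controller, stops):
    """Only the programmed stop positions matter; the path between them does not."""
    for position in stops:
        controller.move_to(position)        # servo drives to the exact point, then stops

def continuous_path(controller, start, end, steps=50):
    """The entire path is specified: intermediate points are interpolated so the
    tool moves in a smooth, constant motion (e.g. for welding or painting)."""
    for k in range(steps + 1):
        t = k / steps
        position = tuple(s + t * (e - s) for s, e in zip(start, end))
        controller.move_to(position)
```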
Mobile robots can operate by remote control or autonomously. A remote control
robot receives instructions from a human operator. In a direct remote control
situation, the robot relays information to the operator about the remote
environment and the operator then sends the robot instructions based on the
information received. This sequence can occur immediately (real-time) or with a
time delay.

Autonomous robots are programmed to understand their environment and take independent action based on the knowledge they possess.
Some autonomous robots are able to "learn" from their past encounters. This means
they can identify a situation, process actions which have produced
successful/unsuccessful results and modify their behavior to optimize success. This
activity takes place in the robot controller.
Body
The body of a robot is related to the job it must perform. Industrial robots often take the shape of a bodyless arm, since the job requires the robot to remain stationary relative to its task. Space robots have many different body shapes, such as a sphere, a platform with wheels or legs, or a balloon, depending on the job.
The free-flying rover Sprint AERCam is a sphere, to minimize damage if it were to bump into the shuttle or an astronaut. Some planetary rovers have solar platforms driven by wheels to traverse terrestrial environments. Aerobot bodies are balloons that will float through the atmosphere of other worlds collecting data. When evaluating what body type is right for a robot, remember that form follows function.

Mobility
How do robots move? It all depends on the job they have to do and the environment they operate in.

In the Water: Conventional unmanned, submersible robots are used in science and industry throughout the oceans of the world. You probably saw the Jason underwater vehicle at work when pictures of the Titanic discovery were broadcast. To get around, autonomous underwater vehicles (AUVs) use propellers and rudders to control their direction of travel.

One area of research suggests that an underwater robot like RoboTuna could propel itself as a fish does, using natural undulatory motion. It is thought that robots that move like fish would be quieter, more maneuverable and more energy efficient.
On Land: Land-based rovers can move around on legs, tracks or wheels. Dante II is a frame-walking robot that is able to descend into volcano craters by rappelling. Dante has eight legs, four on each of two frames. The frames are separated by a track along which the frames slide relative to each other. In most cases Dante II has at least one frame (four legs) touching the ground.
An example of a track-driven robot is Pioneer, a robot developed to clear rubble, make maps and acquire samples at the Chernobyl nuclear reactor site. Pioneer is track-driven like a small bulldozer, which makes it suitable for driving over and through rubble. The wide track footprint gives good stability and platform capacity to deploy payloads.
Many robots use wheels for locomotion.
In the Air/Space:
Robots that operate in the air use engines and thrusters to
get around.
One example is Cassini, an orbiter on its way to Saturn. Movement and positioning are accomplished either by firing small thrusters or by applying a force to speed up or slow down one or more of three "reaction wheels." The thrusters and reaction wheels orient the spacecraft in three axes, which are maintained with great precision.
TRANSFORMATION ‘MATRIX’:
Robot arm kinematics deals with the analytical study of the geometry
of motion of a robot arm with respect to the fixed reference coordinate system without
regard to the forces/moments that cause the motion.

Thus, kinematics deals with the analytical description of the spatial displacement of the robot as a function of time, in particular the relations between the joint-variable space and the position and orientation of the end-effector of a robot arm.
There are two fundamental problems in robot arm kinematics. The first is usually referred to as the direct (or forward) kinematics problem; the second is the inverse kinematics (or arm solution) problem. Since the independent variables in a robot arm are the joint variables, and a task is usually stated in terms of the reference coordinate frame, the inverse kinematics problem is used more frequently.

Denavit and Hartenberg [1955] proposed a systematic and generalized approach of utilizing matrix algebra to describe and represent the spatial geometry of the links of a robot arm with respect to a fixed reference frame. This method uses a 4x4 homogeneous transformation matrix to describe the spatial relationship between two adjacent rigid mechanical links, and it reduces the direct kinematics problem to finding an equivalent 4x4 homogeneous transformation matrix that relates the spatial displacement of the hand coordinate frame to the reference coordinate frame.
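
A minimal sketch of this idea, using the standard Denavit-Hartenberg parameters (joint angle theta, link offset d, link length a, twist alpha) and NumPy, is shown below. It follows the common textbook convention rather than anything specific to this text, and the two-link example values are arbitrary.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """4x4 homogeneous transform between adjacent links (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [ 0,       sa,       ca,      d],
        [ 0,        0,        0,      1],
    ])

def forward_kinematics(dh_params):
    """Direct kinematics: chain the link transforms to relate the hand (end-effector)
    coordinate frame to the fixed reference frame."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Example: a simple two-link planar arm with arbitrary joint angles.
params = [(np.pi / 4, 0.0, 1.0, 0.0),
          (np.pi / 6, 0.0, 0.8, 0.0)]
print(forward_kinematics(params)[:3, 3])   # position of the end-effector
```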

Power
Power for industrial robots can be electric, pneumatic or
hydraulic. Electric motors are efficient, require little maintenance, and aren't very
noisy. Pneumatic robots use compressed air and come in a wide variety of sizes. A
pneumatic robot requires another source of energy such as electricity, propane or
gasoline to provide the compressed air.
Hydraulic robots use oil under pressure and generally
perform heavy duty jobs. This power type is noisy, large and heavier than the other
power sources. A hydraulic robot also needs another source of energy to move the
fluids through its components.
Pneumatic and hydraulic robots require
maintenance of the tubes, fittings and hoses that connect the
components and distribute the energy.
Two important sources of electric power
for mobile robots are solar cells and batteries.
Sensors
Sensors are the perceptual system of a robot and
measure physical quantities like contact, distance, light, sound, strain,
rotation, magnetism, smell, temperature, inclination, pressure, or altitude.
Sensors provide the raw information or signals that must be
processed through the robot's computer brain to provide meaningful information.
Robots are equipped with sensors so they can have an understanding of their
surrounding environment and make changes in their behavior based on the
information they have gathered.
Sensors can permit a robot to have an adequate field of view, a
range of detection and the ability to detect objects while operating in real or near-
real time within its power and size limits.
Additionally, a robot might have an acoustic sensor to detect sound, motion or location, infrared sensors to detect heat sources, contact sensors, tactile sensors to give a sense of touch, or optical/vision sensors. For almost any environmental situation, a robot can be equipped with an appropriate sensor. A robot can also monitor itself with sensors.

Tools
As working machines, robots have defined job duties and carry
all the tools they need to accomplish their tasks onboard their bodies. Many robots
carry their tools at the end of a manipulator.
The manipulator contains a series of segments, jointed or sliding
relative to one another for the purpose of moving objects.
The manipulator includes the arm, wrist and end-effector. An
end-effector is a tool or gripping mechanism attached to the end of a robot arm to
accomplish some task.
It often encompasses a motor or a driven mechanical device. An end-effector can be a sensor, a gripping device, a paint gun, a drill, an arc welding device, etc. Tools are unique to the task the robot must perform.
Advantages of Robotics
The advantages are obvious: robots can do things we humans just don't want to do, and usually do them more cheaply. Robots can do things more precisely than humans and allow progress in medical science and other useful advances.

Educational Goals
The Robotics exhibit was designed to introduce the science behind the design and operation of robots, and after interacting with the exhibit, you will be able to:

1. Define a robot as a machine that gathers information about its environment (senses) and uses that information (thinks) to follow instructions to do work (acts).
2. Recognize the advantages and limitations of robots by comparing how robots and humans sense, think, and act and by exploring uses of robots in manufacturing, research, and everyday settings.
3. Understand your connection with technology and create an excitement about science and math that will prepare you for a workplace in which computers, robotics, and automation are common and essential.

Each major thematic area of the exhibit has specific educational goals accomplished using hands-on activities that compare how human and robotic systems sense, think, and act.

APPLICATIONS:
Robots have many applications.

Industrial robots have long been used for welding and painting. Laboratory robots are used for many repetitive tasks in chemistry, biology and clinical chemistry labs.

Medical robots are being designed for computer-assisted surgery.

In space, robots are being used for unmanned missions to planets, comets and asteroids.

The Impact of Robotics on Society

Robotics and Sensors in Crop Production
Goals:

• To develop robotic systems for selective harvesting of field crops;
• To develop sensors for locating fruit by robotic harvesters;
• To develop sensors for determining fruit ripeness and for predicting shelf-life nondestructively; and
• To develop sensors for site-specific application of chemicals onto individual plants.

Statement of Problem:

Intelligent machines must be capable of sensing the environment and the properties of the bio-materials they are handling.
Robotic harvesting requires sensors to detect and precisely locate all the fruit. In many cases the robot must also determine ripeness in order to selectively harvest only the fruit ready for market.
Research at Purdue University and the Agricultural Research Organization (Volcani Center) in Israel has developed the robotic manipulators, machine vision and image processing technologies necessary to harvest muskmelons. The VIP-ROMPER (Volcani Institute - Purdue University Robotic Melon Picker), traveling at 2 cm/sec with a manipulator speed of 75 cm/sec, was able to detect over 93% of the fruit and successfully picked more than 85% of the melons.

Current Activities:
Visible features of plants enable humans to distinguish weeds
from crops, and to see individual parts of plants. A machine vision and image processing
system capable of extracting and classifying these same features could be used to initiate
a spot sprayer or other control technology.
The general objective of this research is to develop sensor
technologies which permit chemicals to be applied site-specifically on plants in a manner
which minimizes the quantity required to control crop pests.

The specific objectives are:

1. Develop machine vision hardware and image processing software to locate small seedlings, distinguish weeds from the cultivated crop, and measure parameters which affect the quantity and location of chemical to be applied;

2. Develop image processing and pattern recognition algorithms which can interpret the sensed data in real time; and

3. Develop artificial intelligence algorithms (expert systems, neural networks, etc.) capable of making real-time decisions concerning when and how much chemical to apply, based on perceived conditions.

The first step toward these objectives is to create a database of multi-spectral digital images of plants which contain the desired features. This database of plant images is currently being created and will become accessible to cooperators over the World Wide Web to test and evaluate the performance of image processing and pattern recognition algorithms.
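
As a toy illustration of the kind of processing these objectives call for, the sketch below thresholds a normalized difference vegetation index (NDVI) computed from hypothetical red and near-infrared bands to separate plant pixels from soil, and then applies a crude size rule. The bands, threshold, and classification rule are all invented for this example and are far simpler than what the project would actually require.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index from near-infrared and red bands."""
    return (nir - red) / (nir + red + eps)

def plant_mask(nir, red, threshold=0.4):
    """Pixels with high NDVI are treated as vegetation (crop or weed)."""
    return ndvi(nir, red) > threshold

def classify_region(mask, small_area=20):
    """Toy rule: tiny vegetation patches count as 'weed seedling', larger ones as 'crop'.
    A real system would use shape, texture, and learned classifiers instead."""
    area = int(mask.sum())
    return "weed seedling" if area < small_area else "crop plant"

# Hypothetical two-band image of a single plant region.
nir = np.random.uniform(0.5, 0.9, size=(10, 10))
red = np.random.uniform(0.1, 0.4, size=(10, 10))
mask = plant_mask(nir, red)
print(classify_region(mask))
```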

Today's Robotics:
Today, robots are enjoying a resurgence. Faster and cheaper computer processors make robots smarter and less expensive. Meanwhile, researchers are working on ways to make robots move and "think" more efficiently. Although most robots in use today are designed for specific tasks, the goal is to make universal robots, robots flexible enough to do just about anything a human can do.

Problems with Robotics

 As with any machine, robots can break and even cause disaster.
 They are powerful machines that we allow to control certain things.
 When something goes wrong, terrible things can happen. Luckily, this is rare, because robotic systems are designed with many safety features that limit the harm they can do.
 There is also the problem of evil people using robots for evil purposes. This is true today with other forms of technology such as weapons and biological material.
 Of course, robots could be used in future wars. This could be good or bad. If humans perform their aggressive acts by sending machines out to fight other machines, that would be better than sending humans out to fight other humans.
 Teams of robots could be used to defend a country against attacks while limiting human casualties. Either way, human nature is the flawed component that is here to stay.

CONCLUSION
ROBOT – 'THE TERMINATOR':

The ROBOT, which completes every job within the shortest period, itself has no termination. The human brain has limits to how far it can think, but computers have no such limitations. That is, we can use our brains only up to the capacity of the neurons in them, but there is no limit to computer memory.
