
TERM PAPER

OF

INFORMATION SECURITY and PRIVACY

ON

ROBOTICS, VISION and PHYSICAL MODELING

Submitted to: Submitted By:

INDEX

1. Acknowledgement
2. Introduction
3. Research Methodology
4. Literature Review
5. Story of Full Research
6. Major Findings
7. References

ACKNOWLEDGEMENT
This term paper is the culmination of many efforts working together in unison. I am therefore deeply indebted to all those without whose support it would have been impossible to reach this stage of the work.

It is my distinct honour and indeed a great privilege to have worked under the dynamic teaching and able guidance of Ms. Tajinder Kaur (Lecturer of Information Security and Privacy, Lovely Professional University), and I thank her for her wholehearted help, kind inspiration, keen interest, constructive criticism and most valuable suggestions on the subject.

I would also like to convey my gratitude to the NIC Department of Lovely Professional University, with whose help I could carry out my research work over the internet without any problems.

Finally, words fail me to express my thanks to my parents and friends without whose constant motivation, I would never have been able to complete this term paper.
(PANKAJ BHATIA)

INTRODUCTION
Description: Robotics has often been described as the intelligent connection of perception to action. Robot actuators provide the action function, while a variety of sensors provide the perception capability. Computers provide the framework for integration as well as the intelligence needed to coordinate the perception and action capabilities in a meaningful way. Machine vision is one of the most powerful perception mechanisms: it involves extracting, characterizing and interpreting information from images in order to identify or describe objects in the environment. Autonomous robot systems are designed to operate in uncertain and highly dynamic environments.

Applications: The range of applications includes robot systems operating in industrial, domestic, and difficult-to-reach or hazardous environments. Machine vision, other intelligent sensor systems, and human-computer interfaces are developed for industrial process control, medical applications, robotic vehicle navigation, autonomous robot agents, environment monitoring and rescue missions.

RESEARCH METHODOLOGY
The core objective of this paper is to understand robotics, machine vision and physical modeling, and how these techniques can be best or optimally utilised. A secondary research methodology, using internet sources, is employed to study existing findings and research.

LITERATURE REVIEW
1. Model-Based Recognition in Robot Vision
Abstract: This paper presents a comparative study of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the bin-picking problem, in which the parts to be recognized are presented in a jumbled bin. The paper gives valuable insights into 2-D, 2.5-D, and 3-D object representations, which are used as the basis for the recognition algorithms. Three central issues common to each category, namely feature extraction, modeling, and matching, are examined in detail. An evaluation and comparison of existing industrial part-recognition systems and algorithms is given, providing insights for progress toward future robot vision systems.
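The feature extraction, modeling and matching stages described above can be illustrated with a minimal sketch. Everything here (the feature vectors, the model database, the part names) is invented for illustration; it shows only the core idea of matching observed features against stored object models by nearest-neighbour distance, not the paper's actual algorithms.

```python
# Toy sketch of the matching stage of model-based recognition.
import math

# Hypothetical model database: part name -> feature vector, e.g.
# (area, perimeter, number of holes). Values are invented.
MODELS = {
    "bracket": (120.0, 46.0, 2.0),
    "washer":  (80.0, 31.0, 1.0),
    "bolt":    (95.0, 60.0, 0.0),
}

def match_object(features):
    """Return the model whose feature vector is closest in Euclidean distance."""
    def dist(name):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, MODELS[name])))
    return min(MODELS, key=dist)

observed = (82.0, 30.0, 1.0)   # features extracted from a segmented image region
print(match_object(observed))  # prints "washer"
```

Real systems replace the Euclidean comparison with structured matching over graph models and handle partial occlusion, but the extract/model/match decomposition is the same.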

2. Semantic Robot Vision Challenge: Current State and Future Directions
Abstract: The Semantic Robot Vision Challenge discussed in this study aims to integrate these ideas under one umbrella, inspiring both collaboration and new research. The task, visual search for an unknown object, is relevant to both the vision and robotics communities. Moreover, since the interplay of robotics and vision is sometimes ignored, the competition provides a venue to integrate the two communities. The paper also outlines a number of modifications to the competition intended both to improve the state of the art and to increase participation.

3. Why Robots?
Abstract: This paper details arguments for why modeling in the field of artificial language evolution can benefit from the use of real robots. It argues that robotic experimental setups lead to more realistic and robust models, that real-world perception can provide the basis for richer semantics, and that embodiment itself can be a driving force in language evolution. It also reviews a variety of robotic experiments that have been carried out, demonstrating the relevance of the approach.

4. A Novel Chaotic Vision Modeling for Mobile Robots based on Two Dimensional Chaos Optimization
Abstract: Image segmentation for a robot binocular stereo vision system is a key issue in image processing. In this paper, 2-D maximum entropy threshold image segmentation with a chaos optimization algorithm is used to segment the image information collected by a robot vision system, and the algorithm is validated on a real robot binocular stereo vision system. Moreover, for realistic environment simulation, a new programming interface library for IEEE 1394 video devices on Linux is proposed. Simulation experiments show that, compared with the best previous research, the proposed method makes better use of the spatial information of an image and shortens the calculation time.
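The maximum-entropy thresholding idea behind this paper can be sketched in its simpler 1-D form (the paper's actual method is 2-D and adds chaos optimization to speed up the search, both omitted here): the grey-level threshold is chosen to maximize the combined entropy of the background and foreground distributions.

```python
# Simplified 1-D maximum-entropy threshold selection (Kapur-style sketch).
import math

def max_entropy_threshold(histogram):
    """Pick the grey level t maximising H(background) + H(foreground)."""
    total = sum(histogram)
    probs = [h / total for h in histogram]
    best_t, best_h = 0, -1.0
    for t in range(1, len(probs)):
        w0 = sum(probs[:t])          # background probability mass
        w1 = 1.0 - w0                # foreground probability mass
        if w0 == 0 or w1 == 0:
            continue
        # Entropy of each class after normalising by its mass.
        h0 = -sum(p / w0 * math.log(p / w0) for p in probs[:t] if p > 0)
        h1 = -sum(p / w1 * math.log(p / w1) for p in probs[t:] if p > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# A toy bimodal histogram over 8 grey levels: the split falls between the modes.
print(max_entropy_threshold([50, 50, 0, 0, 0, 0, 50, 50]))  # prints 2
```

The 2-D variant scores thresholds over a joint histogram of pixel value and local neighbourhood average, which is where the chaos optimization pays off: the search space becomes too large for exhaustive scanning.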

5. Humanoid Robotics
Abstract: The paper discusses motor control for a social robot, which poses challenges beyond stability and accuracy. Human observers will perceive motor actions as semantically rich, regardless of whether the robot intends the imputed meaning. Such perception, which constrains the robot's physical appearance and movement, can facilitate natural interactions between robot and human. It allows the robot to be readable, making its behavioral intent and motivational state transparent at an intuitive level to those with whom it interacts. It allows the robot to regulate its interactions to suit its perceptual and motor capabilities in an intuitive way, one with which humans naturally cooperate. And it gives the robot leverage over the world far beyond its physical competence. If properly designed, the robot's visual behaviors can match human expectations and allow both robot and human to participate in natural and intuitive social interactions.

STORY OF FULL RESEARCH


Robotics is poised to be the next transformative technology. Robots are widely used in manufacturing, warfare, and disaster response, and the market for personal robotics is exploding. Worldwide sales of home robots such as iRobot's popular robotic vacuum cleaner are in the millions. In fact, Honda has predicted that by the year 2020 it will sell as many robots as it does cars. Microsoft founder Bill Gates believes that the robotics industry is in the same place today as the personal computer (PC) business was in the 1970s, a belief that is significant given that there are now well over one billion PCs, just three decades after their introduction into the market.

Modeling has been a central theme of artificial intelligence. This is very evidently so in computer vision, where the modeling of objects has preoccupied researchers since the dawn of the field some three decades ago. A good strategy for progress on the vision problem is physics-based modeling. The idea is to incorporate principles of physical dynamics into conventional geometric models in order to represent not only the shapes of objects but their physical behaviors as well. Through the use of range scanners, generic biomechanical facial models can be automatically personalized to individuals. Currently, artificial faces support a model-based approach to facial image analysis. In the future, it should be possible to incorporate brains and some degree of intelligence into them as well.
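The physics-based modeling idea, augmenting a geometric model with physical dynamics, can be illustrated by the smallest possible case: a single model node governed by a damped spring, so the shape can deform under force and settle back to rest. All parameters here are invented for illustration.

```python
# Minimal sketch of a physics-based model node: a damped spring pulls the
# node's position back toward its geometric rest position.
def simulate_spring_node(x0, rest, k=4.0, damping=1.5, dt=0.01, steps=2000):
    """Semi-implicit Euler integration of a damped spring toward `rest`."""
    x, v = x0, 0.0
    for _ in range(steps):
        force = -k * (x - rest) - damping * v  # Hooke's law + viscous damping
        v += force * dt                        # unit mass assumed
        x += v * dt
    return x

# A node displaced to 1.0 settles back to its rest position at 0.0.
print(round(simulate_spring_node(1.0, 0.0), 3))  # prints 0.0
```

A deformable face or object model is, conceptually, a mesh of many such nodes coupled by springs, with the geometry supplying the rest shape and the dynamics supplying the behavior.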

Physical Modeling and Vision
The design of an autonomous robotic system for controlling a powder thermal spraying oxyacetylene torch tip in three-dimensional space requires detailed consideration of a range of robot design issues. Robot design covers a very broad range of subjects, and this was one of the key reasons for pursuing robotics research.

A virtual reality framework is set up for the micro world to supervise the manipulation process, with a virtual environment corresponding to the actual micromanipulation system used to preview or simulate the motion plan before the actual manipulation. The operator can monitor the microassembly through a master hand with force feedback together with vision feedback. In the virtual environment, the virtual objects should reflect not only the shape characteristics of the actual objects, but also the physical characteristics of the actual world. To avoid collisions between virtual objects, the mathematical models for collision checking are analysed, and a collision check model based on the FDH (fixed-direction hulls) bounding box method is set up. The relation between collision detection and collision response is described. The modeling of the deformation of micro objects under micro forces is studied. After analyzing the states of the micro needle during the peg-in-hole process, the differential equation of the micro needle's deformation is given. The deformation and virtual force are fed back to the operator.

The Robotics Modeling, Simulation, and Visualization process captures pertinent behavior of physical systems in mathematical and software representations for use in understanding, control, and operation of physical robotic systems. It provides:
o High-fidelity physics-based simulations of spacecraft entry, descent and landing, planetary surface roving, and airship mobility.
o Ground control software for sequencing, simulation, visualization, and human/robot interaction.
o Infrastructure for maturing, validating, and evaluating the science return of advanced robotic algorithms.
o Infrastructure for maintaining large software systems and their interaction with large databases for planetary mission simulation applications.
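The bounding-box collision check mentioned above can be illustrated in its simplest form, axis-aligned boxes (an axis-aligned box is the simplest fixed-direction hull); the paper's actual FDH model uses more bounding directions, but the overlap test follows the same pattern per axis.

```python
# Sketch of an axis-aligned bounding-box (AABB) overlap test, the simplest
# instance of the fixed-direction-hull collision check idea.
def boxes_collide(a, b):
    """a, b: (xmin, ymin, xmax, ymax). True if the two boxes overlap."""
    return (a[0] <= b[2] and b[0] <= a[2] and   # overlap on the x axis
            a[1] <= b[3] and b[1] <= a[3])      # overlap on the y axis

print(boxes_collide((0, 0, 2, 2), (1, 1, 3, 3)))  # True: boxes overlap
print(boxes_collide((0, 0, 1, 1), (2, 2, 3, 3)))  # False: boxes are disjoint
```

Because the test is a cheap conjunction of interval checks, it is typically run first to rule out collisions before any expensive exact geometry or collision-response computation.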

CONTROL and SENSING
Control and navigation of mobile robots have mainly been performed based on exteroceptive sensing, which assumes that the world position and orientation (pose) of the robot in the environment are well known. However, internal variables such as wheel velocity and position are unknown. This lack of information can make robot control very difficult, and even impossible, when only exteroceptive and low-frequency information is used. A system that uses only exteroceptive sensors may therefore turn out to be both slow and unsatisfactory, mainly because fast disturbances cannot be perceived and, consequently, cannot be corrected. To circumvent these problems, many authors have used fast sensor-fusion techniques that combine exteroceptive information with high-frequency proprioceptive data. Another approach to robot control, frequently adopted in visual servoing systems, is to use exteroceptive information directly.
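The sensor-fusion idea above, correcting fast but drifting proprioceptive odometry with occasional exteroceptive pose fixes, can be sketched as a one-dimensional complementary filter. The rates, gain and odometry bias below are all invented for illustration; real systems typically use a Kalman filter over the full pose.

```python
# Toy complementary-filter fusion of high-rate odometry with low-rate fixes.
def fuse_pose(odometry_deltas, fixes, gain=0.5):
    """odometry_deltas: per-step displacement estimates (high rate, drifting).
    fixes: dict mapping step index -> absolute pose from an exteroceptive
    sensor (low rate). Returns the fused pose estimate after the last step."""
    pose = 0.0
    for step, delta in enumerate(odometry_deltas):
        pose += delta                      # dead-reckoning prediction
        if step in fixes:                  # occasional absolute correction
            pose += gain * (fixes[step] - pose)
    return pose

deltas = [0.11] * 10           # odometry over-estimates the true 0.1 per step
fixes = {4: 0.5, 9: 1.0}       # true poses seen by a camera at steps 4 and 9
print(fuse_pose(deltas, fixes))  # closer to the true 1.0 than raw odometry (1.1)
```

The proprioceptive channel supplies the bandwidth to react to fast disturbances between fixes, while the exteroceptive channel bounds the accumulated drift, exactly the division of labour the section describes.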

Dual Dynamics Design Environment: A Practical Design
The successful design of robot software requires means to specify, implement and simulate, as well as to run and debug, the robot software in real time on physical robots. It is a major challenge to make the Dual Dynamics approach productive in a state-of-the-art design flow. The result of this work is the integrated Dual Dynamics Design Environment. It allows designers to build Dual Dynamics models at a high level of abstraction and to synthesize all the code artifacts required to make those models operative in practice: documentation, a simulation model, control programs for physical robots, and a parameter set for generic test and debug tools. The Dual Dynamics Design Environment comprises the graphical specification and code generation tool DD-Designer, the simulator DDSim, and the real-time monitoring tool beTee.

DD-Designer allows a Dual Dynamics model to be specified in terms of sensors, actuators, sensor pre-processing elements and a hierarchy of coupled behaviors. Each of the processing elements is formulated using a combination of control data flow and differential equations. This specification is the basis for automatic generation of all the code artifacts required by the tools in the design environment. DDSim simulates a team of robots on a playground. It provides sophisticated simulations of the ball and of the robots' sensors, including laser scanner simulation and an emulation of the vision system used by the robots. Since each robot can be configured with a different behavior system, behavior systems can be benchmarked against each other. beTee is a real-time monitoring tool for tracing arbitrary variables of the simulated or (via wireless LAN) the physical robot.

GMD-Robots
The test bed and demonstrator application for the Dual Dynamics approach is soccer-playing robots. They take part in the midsize league tournaments of the international RoboCup contest, a very demanding benchmark for mobile robots. RoboCup tournaments have been organized since 1997, with a yearly increasing number of participating teams from all over the world. This year, the first European competition will be held in Amsterdam at the beginning of June.

To interact naturally with computers, we must have computers that can relate to their environment through visual and physical interactions, from recognizing the face of an approaching person to learning about dynamics by bouncing a ball. In robotics, our expertise ranges from sophisticated grasping techniques and novel motion planning methods to complex tool use and experimenting with new types of dynamically stable robots. In computer vision, our strengths include scene modeling, face identification, object recognition, and reading the text of signs in complex outdoor environments. Our graphics group focuses on high speed realistic rendering techniques and visualizing complex lighting effects. A major cross-cutting interest is the desire to model basic learning processes in humans and machines using data acquired from sensors mounted on robots and mobile video cameras. By adapting our computers' strategies of grasping, reaching, moving, and recognizing to real world data rather than to synthetic laboratory data, we are building systems robust enough to operate in realistic scenarios.

MAJOR FINDINGS
o More sophisticated techniques will have to be developed in order to deal with less structured industrial environments and permit more task versatility. These techniques will incorporate higher-level modeling (e.g., highly organized graph models containing 2.5-D and 3-D descriptions), more powerful feature-extraction methods (e.g., global structural features of object boundaries, surfaces, and volumes), and more robust matching procedures for efficiently comparing large sets of complex models with observed image features.
o Multiple, disparate sensor types, including vision, range, and tactile sensors, will significantly improve the quality of the features that can be determined about a scene.
o The Semantic Robot Vision Challenge is a competition that provides a venue for embodied recognition. The contest needs numerous modifications in order to have significant impact.
o Computational models using real robots benefit from higher realism, increased robustness and richer semantics.
o The multi-threaded API provides a simple and flexible development environment and improves the efficiency of image segmentation. After optimizing an image, it is easier for a computer to study, process and recognize the characteristics of the image, and to perform posture control and work-piece recognition for a robot with a vision sensor.

REFERENCES
1. Robotics Vision and Graphics. http://www.cs.umass.edu/faculty/robotics-computer-vision-andgraphics
2. Research in Robotics, Machine Vision and Autonomous Systems. www.site.uottawa.ca/eng/research/RoboticsVision/index.html
3. AI, CogSci and Robotics: Robotics, Agent Modelling and Vision. transit-port.net/AI.CogSci.Robotics/robotics.html
4. Model-Based Recognition in Robot Vision. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.2845&rep=rep1&type=pdf
5. Semantic Robot Vision Challenge: Current State and Future Directions. http://www.cs.ubc.ca/~poojav/papers/IJCAIworkshop.pdf
6. Why Robots? http://martin-loetzsch.de/publications/loetzsch10why.pdf
7. A Novel Chaotic Vision Modeling for Mobile Robots based on Two Dimensional Chaos Optimization. http://www.ipcsit.com/vol14/8-ICCSM2011-S0016.pdf
8. Humanoid Robotics. http://www.cs.yale.edu/homes/scaz/papers/IEEE-vision.pdf
