Prepared by Mrs. Madhumathi Rajesh
Reference: Artificial Intelligence, 2nd Ed., Elaine Rich & Kevin Knight, Tata McGraw Hill, 1999
Syllabus
Chapter 1: Introduction to AI.
Chapter 2: Problem formulation, problem definition, production systems, control strategies, search strategies, problem characteristics, production system characteristics.
Chapter 3: Related algorithms, measure of performance and analysis of search algorithms.
What is AI?
Intelligence: the ability to learn, understand and think (Oxford dictionary).
AI is the study of how to make computers do things which, at the moment, people do better.
The AI Problems
Early AI problems: game playing and theorem proving; general problem solving (GPS); perception tasks such as medical diagnosis and chemical analysis; natural language understanding. Intelligence in these tasks requires knowledge and common sense.

Task Domains of AI
Mundane tasks:
- Perception: vision, speech
- Natural language: understanding, generation, translation
- Common sense reasoning
- Robot control
Formal tasks:
- Games: chess, checkers, etc.
- Mathematics: geometry, logic, proving properties of programs
Expert tasks:
- Engineering: design, fault finding, manufacturing planning
- Scientific analysis
- Medical diagnosis
- Financial analysis
What is an AI Technique?
Intelligence requires knowledge. Knowledge, however, possesses some less desirable properties:
- It is voluminous.
- It is hard to characterize accurately.
- It is constantly changing.
- It differs from data in the way it can be used.

An AI technique is a method that exploits knowledge, which should be represented in such a way that:
- It captures generalizations.
- It can be understood by the people who must provide it.
- It can easily be modified to correct errors.
- It can be used in a great variety of situations.
Program 1:
Move-Table: a large vector of 19,683 (3^9) elements, where each element is itself a 9-element vector describing a board.

Algorithm:
1. View the vector representing the board as a ternary number and convert it to a decimal number.
2. Use the computed number as an index into the Move-Table and access the vector stored there.
3. Set the new board to that vector.
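The index computation in step 1 can be sketched in Python; the 0/1/2 encoding for blank/X/O is an assumption made here, and the 19,683-entry Move-Table itself is omitted:

```python
def board_to_index(board):
    """View a 9-square board as a ternary number (0 = blank, 1 = X,
    2 = O, an assumed encoding) and convert it to a decimal index."""
    index = 0
    for square in board:          # most significant trit first
        index = index * 3 + square
    return index

print(board_to_index([0] * 9))    # empty board -> index 0
print(board_to_index([2] * 9))    # all O's -> index 19682
```

The resulting number selects one of the 19,683 stored board vectors.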
Board layout (squares numbered 1-9):
1 2 3
4 5 6
7 8 9
Program 2 represents the board as a magic square:
8 3 4
1 5 9
6 7 2
Each row, column and diagonal sums to 15.
Passing the Turing Test requires:
- Natural language processing
- Knowledge representation
- Automated reasoning
- Machine learning

[Turing Test setup: a human interrogator converses with both an AI system and a human, and must decide which is which.]
The set of all possible states for a given problem is known as the state space of the problem. State-space representation is highly beneficial in AI because it makes explicit all the possible states, the operations, and the goals. If the entire set of possible states is given, it is possible to trace a path from the initial state to the goal state and identify the sequence of operators necessary for doing so.
Production Rules:
To define a problem formally, we must:
1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from which the problem-solving process may start (the initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (the goal states).
4. Specify a set of rules that describe the available actions. The order of application of these rules is called the control strategy.
Production Systems
A production system consists of:
- A set of rules, each consisting of a left-hand side and a right-hand side. The left-hand side, or pattern, determines the applicability of the rule; the right-hand side describes the operation to be performed if the rule is applied.
- One or more knowledge bases/databases that contain whatever information is appropriate for the particular task. Some parts of the database may be permanent, while other parts may pertain only to the solution of the current problem. The information in these databases may be structured in any appropriate way.
- A control strategy that specifies the order in which the rules will be compared to the database, and a way of resolving the conflicts that arise when several rules match at once.
- A rule applier.
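As an illustrative sketch (the rule names, the (4-gallon, 3-gallon) state encoding, and the breadth-first control strategy are choices made here, not taken from the notes), the water jug problem used below can be written as a small production system:

```python
from collections import deque

# Production rules for the water jug problem: a 4-gallon and a 3-gallon
# jug, goal = exactly 2 gallons in the 4-gallon jug. Each rule has a
# left-hand side (applicability test) and a right-hand side (action).
RULES = [
    ("fill the 4-gallon jug",  lambda s: s[0] < 4, lambda s: (4, s[1])),
    ("fill the 3-gallon jug",  lambda s: s[1] < 3, lambda s: (s[0], 3)),
    ("empty the 4-gallon jug", lambda s: s[0] > 0, lambda s: (0, s[1])),
    ("empty the 3-gallon jug", lambda s: s[1] > 0, lambda s: (s[0], 0)),
    ("pour 3-gal into 4-gal",  lambda s: s[1] > 0 and s[0] < 4,
     lambda s: (min(4, s[0] + s[1]), s[1] - (min(4, s[0] + s[1]) - s[0]))),
    ("pour 4-gal into 3-gal",  lambda s: s[0] > 0 and s[1] < 3,
     lambda s: (s[0] - (min(3, s[0] + s[1]) - s[1]), min(3, s[0] + s[1]))),
]

def solve(start=(0, 0), goal=lambda s: s[0] == 2):
    """Breadth-first control strategy: systematically try every
    applicable rule, never revisiting a state."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal(state):
            return plan
        for name, applicable, action in RULES:
            if applicable(state):
                new = action(state)
                if new not in seen:
                    seen.add(new)
                    frontier.append((new, plan + [name]))
    return None
```

The breadth-first control strategy here is both systematic (no state is examined twice) and motion-causing, the two requirements discussed next.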
Control Strategies
Requirements for a good control strategy:
It should cause motion: in the water jug problem, if we apply a simple control strategy of starting each time from the top of the rule list and always choosing the first applicable rule, we will never move towards a solution.
It should be systematic: if instead we choose a rule at random from the applicable rules, the strategy definitely causes motion and will eventually lead to a solution, but we may arrive at the same state several times. This is because the control strategy is not systematic.
Construct a tree with the initial state as its root. Generate all the offspring of the root by applying each of the applicable rules to the initial state; then expand the resulting nodes in the same way, level by level, until a goal state is reached.

The major problems of this search procedure are:
1. The amount of time needed to generate all the nodes is considerable (time complexity).
2. The memory requirement is a major hurdle (space complexity).
3. The search process retains many unwanted nodes that are of no practical use for the search.
A heuristic is a technique that improves the efficiency of a search process, possibly by sacrificing claims of systematicity and completeness. It no longer guarantees to find the best answer, but almost always finds a very good answer.
Using good heuristics, we can hope to get good solutions to hard problems (such as the travelling salesman problem) in less than exponential time.
There are general-purpose heuristics that are useful in a wide variety of problem domains. We can also construct special-purpose heuristics, which are domain specific.
Heuristic Function
This is a function that maps problem state descriptions to measures of desirability, usually represented as numbers. Which aspects of the problem state are considered, how those aspects are evaluated, and the weights given to individual aspects are chosen in such a way that the value of the heuristic function at a given node in the search process gives as good an estimate as possible of whether that node is on the desired path to a solution.
Well-designed heuristic functions can play an important part in efficiently guiding a search process toward a solution. Some heuristics work by selecting the locally superior alternative at each step; for such algorithms, it is often possible to prove an upper bound on the error, which provides reassurance that not too much solution quality is being sacrificed (e.g. the nearest-neighbour heuristic for the TSP).
In many AI problems, it is often hard to measure precisely the goodness of a particular solution.
For real-world problems, it is often useful to introduce heuristics based on relatively unstructured knowledge. It is often impossible to define this knowledge in such a way that a mathematical analysis of its effect on the search can be performed.
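For instance, the nearest-neighbour rule for the travelling salesman problem is a simple greedy heuristic; in this sketch the coordinates and the fixed starting city are arbitrary illustration choices:

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: from city 0, repeatedly travel to the
    closest unvisited city. Does O(n^2) work instead of examining all
    (n-1)! tours; the tour is usually good but not guaranteed optimal."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        here = cities[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

print(nearest_neighbour_tour([(0, 0), (3, 0), (1, 0)]))   # [0, 2, 1]
```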
Problem Characteristics
Analysing the problem:
1. Is the problem decomposable into a set of independent smaller subproblems?
2. Can solution steps be ignored or at least undone if they prove to be unwise?
3. Is the universe predictable?
4. Is a good solution absolute or relative?
5. Is the solution a state or a path?
6. What is the role of knowledge?
7. Does the task require interaction with a person?
8. Is the knowledge base consistent?
Example 1 (Ignorable): theorem proving (solution steps can be ignored).
Example 2 (Recoverable): 8-puzzle (solution steps can be undone).
Example 3 (Irrecoverable): chess (solution steps cannot be undone).
In an unpredictable universe, the best we can do is find a sequence of operators that has a good probability of leading to a solution. We need to allow for a process of plan revision to take place.
In the travelling salesman problem, our goal is to find the shortest route: a best-path problem. An any-path problem can often be solved in a reasonable amount of time using heuristics. Best-path problems are, in general, computationally harder than any-path problems.
Water jug: here it is not sufficient to report that we have solved the problem; we must also report the path we found to the state (2, 0). Thus a statement of a solution to this problem must be a sequence of operations (a plan) that produces the final state.
The decision between reporting a solution as a state and reporting it as a path will be important in the choice of a problem-solving method.
Target problem: A man is standing 150 ft from a target. He plans to hit the target by shooting a gun that fires bullets with a velocity of 1500 ft/sec. How high above the target should he aim?
Solution:
The velocity of the bullet is 1500 ft/sec, i.e., the bullet takes 150/1500 = 0.1 sec to reach the target. Assume the bullet travels in a straight line. Due to gravity, the bullet falls a distance of (1/2)gt^2 = (1/2)(32)(0.1)^2 = 0.16 ft. So if the man aims 0.16 ft above the target, the bullet will hit the target.
Now there is a contradiction with the assumption that the bullet travels in a straight line, because in reality the bullet travels in an arc. Therefore there is an inconsistency in the knowledge used.
Classes of production systems:
- Monotonic production system: the application of a rule never prevents the later application of another rule that could also have been applied when the first rule was selected.
- Non-monotonic production system: one in which this property does not hold.
- Partially commutative production system: has the property that if the application of a particular sequence of rules transforms state x into state y, then any allowable permutation of those rules also transforms state x into state y (e.g. theorem proving).
- Commutative production system: one that is both monotonic and partially commutative.
Examples of the four categories:

                            Monotonic             Non-monotonic
Partially commutative       Theorem proving       Robot navigation
Not partially commutative   Chemical synthesis    Bridge
Two implementation questions for production systems:
- How to select applicable rules (the matching problem).
- How to represent each node of the search process (the knowledge representation problem).
Generate-and-Test
Algorithm:
1. Generate a possible solution.
2. Test to see if this is actually a solution.
3. Quit if a solution has been found; otherwise, return to step 1.
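A sketch of exhaustive generate-and-test applied to the n-queens problem (the choice of problem is ours, purely to make the three steps concrete):

```python
from itertools import permutations

def queens(n):
    """Generate candidate placements (one queen per row, one per
    column), test the diagonal constraint, and quit on the first
    placement that passes."""
    for cols in permutations(range(n)):                  # step 1: generate
        ok = all(abs(cols[i] - cols[j]) != j - i         # step 2: test
                 for i in range(n) for j in range(i + 1, n))
        if ok:
            return cols                                  # step 3: quit
    return None
```

For n = 8 this may examine up to 8! = 40,320 candidates, which illustrates why plain generate-and-test is inefficient on large spaces.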
Pros and Cons:
Acceptable for simple problems. Inefficient for problems with a large search space.
Generate-and-Test
Exhaustive generate-and-test. Heuristic generate-and-test: do not consider paths that seem unlikely to lead to a
solution.
Plan generate-test:
Create a list of candidates. Apply generate-and-test to that list.
Hill Climbing
Searching for a goal state = Climbing to the top of a hill
Generate-and-test + direction to move. Heuristic function to estimate how close a given state is to a goal state.
Algorithm for simple hill climbing:
1. Evaluate the initial state. If it is a goal state, return it; otherwise make it the current state.
2. Loop until a solution is found or there are no new operators left to be applied:
   (a) Select and apply a new operator to the current state.
   (b) Evaluate the new state: if it is a goal state, quit; if it is better than the current state, make it the new current state.
Note: the evaluation function needs task-specific knowledge.

Algorithm for steepest-ascent hill climbing:
1. Evaluate the initial state. If it is a goal state, return it; otherwise make it the current state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
   (a) Let SUCC be a state such that any possible successor of the current state will be better than SUCC (the worst possible state).
   (b) For each operator that applies to the current state, evaluate the new state: if it is a goal state, quit; if it is better than SUCC, set SUCC to this state.
   (c) If SUCC is better than the current state, set the current state to SUCC.
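Simple hill climbing can be sketched as follows; the integer toy problem and its evaluation function are invented for illustration:

```python
def hill_climb(state, successors, value):
    """Simple hill climbing: move to the first successor that is
    better than the current state; stop when none is."""
    while True:
        for s in successors(state):
            if value(s) > value(state):
                state = s             # better state becomes current
                break
        else:
            return state              # no better successor: stop

# Toy problem: maximize -(x - 7)^2 over the integers, stepping by 1.
best = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
print(best)   # 7
```

On this toy landscape hill climbing reaches the global maximum; the blocks world example that follows shows how it can instead get stuck at a local maximum.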
Ways to overcome the disadvantages (local maxima, plateaus, ridges):
- Backtrack to some earlier node and try going in a different direction.
- Make a big jump to try to get into a new section of the search space.
- Move in several directions at once.
Blocks World
Start (top to bottom): A, D, C, B (a single stack).
Goal (top to bottom): D, C, B, A (a single stack).
Local heuristic:
+1 for each block that is resting on the thing it is supposed to be resting on.
-1 for each block that is resting on the wrong thing.
Scoring with the local heuristic: the start state (the single stack A, D, C, B) scores 0. Moving A to the table produces a state that scores 2 (A, C and D are correctly supported; B is not). From that state, every possible move, i.e. putting A back on D, putting D on the table, or putting D on A, yields a state scoring 0.
This halts hill climbing, because a local maximum has been reached: the current state (score 2) is better than all of its successors, but it is not the goal state (score 4).
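The local heuristic can be computed directly; the dictionary encoding (each block mapped to the thing it rests on) is an assumed representation:

```python
def local_score(on, goal_on):
    """Blocks-world local heuristic: +1 for each block resting on what
    it rests on in the goal state, -1 for each block resting on the
    wrong thing."""
    return sum(1 if on[b] == goal_on[b] else -1 for b in on)

goal  = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C'}
start = {'A': 'D', 'D': 'C', 'C': 'B', 'B': 'table'}      # stack A,D,C,B
moved = {'A': 'table', 'D': 'C', 'C': 'B', 'B': 'table'}  # A moved to table
print(local_score(start, goal), local_score(moved, goal))  # 0 2
```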
Global heuristic:
For each block that has the correct support structure: +1 to every block in the support structure.
For each block that has a wrong support structure: -1 to every block in the support structure.
Scoring the same states with the global heuristic: the stuck state (A on the table; stack D, C, B) scores -3. Its successors score -6 (putting A back on D), -2 (putting D on A), and -1 (putting D on the table). Putting D on the table (-1) is now better than the current state (-3), so hill climbing is no longer stuck: the global heuristic captures the fact that the stack D, C, B must be disassembled before the goal stack can be built.
Simulated Annealing
A variation of hill climbing in which, at the beginning of the process, some downhill moves may be made.
The idea is to do enough exploration of the whole space early on that the final solution is relatively insensitive to the starting state. This lowers the chances of getting caught at a local maximum, a plateau, or a ridge.
Physical Annealing
Physical substances are melted and then gradually cooled until some solid state is
reached.
The goal is to produce a minimal-energy state. Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal
will be attained.
Simulated Annealing
Compute dE, the change in the value (energy) of the state.
If dE < 0 (the move is downhill), make the new state the current state.
Otherwise, make the new state the current state with probability p = e^(-dE/kT), where T is the temperature and k is a constant.
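A minimal sketch of the acceptance rule and annealing schedule; the toy minimization problem, cooling rate, and temperatures are illustrative choices:

```python
import math
import random

def anneal(state, neighbour, energy, t=10.0, cooling=0.95, steps=2000):
    """Simulated annealing for minimization: downhill moves are always
    accepted; uphill moves are accepted with probability e^(-dE/T),
    which shrinks as the temperature T is lowered."""
    for _ in range(steps):
        candidate = neighbour(state)
        d_e = energy(candidate) - energy(state)
        if d_e < 0 or random.random() < math.exp(-d_e / t):
            state = candidate
        t = max(t * cooling, 1e-9)        # annealing schedule
    return state

random.seed(0)   # deterministic demo
best = anneal(50, lambda x: x + random.choice([-1, 1]),
              lambda x: (x - 3) ** 2)
```

Lowering the temperature slowly enough mirrors the annealing schedule of physical annealing described above.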
Best-First Search
Depth-first search: good because not all competing branches have to be expanded. Breadth-first search: good because it does not get trapped on dead-end paths.
Combining the two: follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.
[Figure: a best-first search tree. A is expanded to B, C (5) and D; the most promising node is expanded at each step: D yields E (4) and F (6), B yields G (6) and H (5), and E yields I (2) and J (1).]
Best-First Search
Algorithm:
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN:
   (a) Pick the best node in OPEN.
   (b) Generate its successors.
   (c) For each successor: if it has not been generated before, evaluate it, add it to OPEN, and record its parent; if it has been generated before, change the parent if this new path is better than the previous one.
Best-First Search
Greedy search:
h(n) = estimated cost of the cheapest path from node n to a goal state.
Uniform-cost search:
g(n) = cost of the cheapest path from the initial state to node n.
Greedy search is neither optimal nor complete.
Best-First Search
Algorithm A* (Hart et al., 1968):
f(n) = g(n) + h(n), where
g(n) = cost of the cheapest path from the initial state to node n, and
h(n) = estimated cost of the cheapest path from node n to a goal state.
A* Algorithm
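A compact sketch of the A* algorithm; the example graph, its edge costs, and the zero heuristic (trivially admissible) are invented for illustration:

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search: always expand the open node with the lowest
    f(n) = g(n) + h(n). `neighbours(n)` yields (successor, cost) pairs."""
    open_list = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2        # better path to succ found
                heapq.heappush(open_list,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None, float('inf')

# Hypothetical graph: edge costs chosen so S->B->G beats S->A->G.
graph = {'S': [('A', 1), ('B', 2)], 'A': [('G', 5)], 'B': [('G', 2)], 'G': []}
path, cost = a_star('S', 'G', lambda n: graph[n], lambda n: 0)
print(path, cost)   # ['S', 'B', 'G'] 4
```

With h(n) = 0 the algorithm reduces to uniform-cost search; any admissible h only prunes the exploration.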
Problem Reduction
Goal: Acquire TV set
[Figure: an AND-OR graph decomposing the goal into alternatives, e.g. stealing a TV set, or earning some money AND buying a TV set.]
[Figure: AND-OR graphs with node cost estimates (such as A = 9, B = 3, C = 4, D = 5), used to illustrate how costs are revised as the graph is searched.]
Constraint Satisfaction
Many AI problems can be viewed as problems of constraint satisfaction.
Cryptarithmetic puzzle:
Constraint Satisfaction
As compared with a straightforward search procedure, viewing a problem as one of constraint satisfaction can substantially reduce the amount of search.
Constraint satisfaction operates in a space of constraint sets. The initial state contains the original constraints given in the problem. A goal state is any state that has been constrained "enough".
Two-step process:
1. Constraints are discovered and propagated as far as possible.
2. If there is still not a solution, search begins, adding new constraints.
SEND + MORE = MONEY
Initial state:
- No two letters have the same value.
- The sums of the digits must be as shown.

Constraint propagation yields: M = 1; S = 8 or 9; O = 0; N = E + 1; C2 = 1; N + R > 8; E != 9.
Guessing E = 2 then gives: N = 3; R = 8 or 9; and 2 + D = Y or 2 + D = 10 + Y, producing the two branches C1 = 0 and C1 = 1.
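The deductions above can be checked by search; the sketch below hard-codes only the first two propagated constraints (M = 1, O = 0) and brute-forces the rest:

```python
from itertools import permutations

def solve_send_more_money():
    """Search for the assignment satisfying SEND + MORE = MONEY,
    after fixing M = 1 and O = 0 as derived by constraint propagation."""
    assign = {'M': 1, 'O': 0}
    for digits in permutations([2, 3, 4, 5, 6, 7, 8, 9], 6):
        assign.update(zip('SENDRY', digits))   # remaining distinct letters
        send  = int(''.join(str(assign[c]) for c in 'SEND'))
        more  = int(''.join(str(assign[c]) for c in 'MORE'))
        money = int(''.join(str(assign[c]) for c in 'MONEY'))
        if send + more == money:
            return send, more, money

print(solve_send_more_money())   # (9567, 1085, 10652)
```

Fixing M and O first shrinks the search from millions of assignments to 8P6 = 20,160, which is the point of propagating constraints before searching.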
Means-ends analysis
Involves detecting the difference between the current state and the goal state. Once a difference is identified, an operator that can reduce the difference must be found. But perhaps that operator cannot be applied to the current state: this sets up a subproblem of getting to a state where it can be applied. The operator also may not result in the goal state, setting up a second subproblem of getting from the new state to the goal state.
MEA
The MEA process is applied recursively. Each rule (operator) has:
- LHS: preconditions, and
- RHS: the aspects of the problem state that are changed.
A difference table records which differences each operator can reduce.
Example:
Problem for a household robot: moving a desk with two things on it from one room to another. The main difference between the start and goal states is the location of the desk, so choose PUSH or CARRY.
Operators for the robot, with their preconditions and results:

Operator            Preconditions                                        Results
PUSH(obj, loc)      at(robot, obj) & large(obj) & clear(obj) & armempty  at(obj, loc) & at(robot, loc)
CARRY(obj, loc)     at(robot, obj) & small(obj)                          at(obj, loc) & at(robot, loc)
WALK(loc)           none                                                 at(robot, loc)
PICKUP(obj)         at(robot, obj)                                       holding(obj)
PUTDOWN(obj)        holding(obj)                                         not holding(obj)
PLACE(obj1, obj2)   at(robot, obj2) & holding(obj1)                      on(obj1, obj2)

Part of the difference table (which operators reduce which differences):
Move object: PUSH, CARRY
Move robot: WALK
Be holding object: PICKUP
Means-Ends Analysis
1. Compare CURRENT to GOAL. If there are no differences, return success.
2. Otherwise, select the most important difference and reduce it by doing the following until success or failure is signalled:
   (a) Select an as yet untried operator O that is applicable to the current difference. If there are no such operators, signal failure.
   (b) Attempt to apply O to the current state. Generate descriptions of two states: O-START, a state in which O's preconditions are satisfied, and O-RESULT, the state that would result if O were applied in O-START.
   (c) If FIRST-PART = MEA(CURRENT, O-START) and LAST-PART = MEA(O-RESULT, GOAL) both succeed, signal success and return the concatenation of FIRST-PART, O, and LAST-PART.
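The recursive structure can be sketched on a toy domain (the fact strings, operators, and set-based state representation are all invented for illustration, and the difference table is reduced here to "an operator helps if it adds a missing goal fact"):

```python
def apply_ops(state, names, ops):
    """Apply a sequence of named operators to a state."""
    for n in names:
        _, add, delete = ops[n]
        state = (state | add) - delete
    return state

def mea(current, goal, ops, depth=6):
    """Means-ends analysis: pick an operator that reduces the current
    difference, recursively reach its preconditions (FIRST-PART), apply
    it, then recursively finish the job (LAST-PART)."""
    if goal <= current:
        return []                         # no difference: done
    if depth == 0:
        return None
    for name, (pre, add, delete) in ops.items():
        if not add & (goal - current):
            continue                      # cannot reduce the difference
        first = mea(current, pre, ops, depth - 1)
        if first is None:
            continue                      # cannot satisfy O's preconditions
        state = apply_ops(current, first, ops)
        state = (state | add) - delete    # apply the operator itself
        last = mea(state, goal, ops, depth - 1)
        if last is not None:
            return first + [name] + last
    return None

# Toy domain: the robot must reach the desk before it can push it.
OPS = {
    "walk-to-desk": (frozenset(), frozenset({"at(robot, desk)"}), frozenset()),
    "push-desk": (frozenset({"at(robot, desk)"}),
                  frozenset({"at(desk, room2)"}), frozenset()),
}
plan = mea(frozenset(), frozenset({"at(desk, room2)"}), OPS)
print(plan)   # ['walk-to-desk', 'push-desk']
```

Solving for the PUSH operator's precondition spawns the WALK subproblem, exactly the recursive behaviour described in step 2(c).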