
Chapter 6 Search, Games and Problem Solving

Informed Search Using Search Heuristics
Heuristics are guidelines for where it is best to look for a solution:
- may find a solution faster than uninformed search, but without guarantee
- may result in a solution not being found
- closely linked with the need to make real-time decisions with limited resources
- in practice, a good solution found quickly is preferred over an optimal solution that is expensive to find

Heuristic evaluation function, f(s)
- used to mathematically model a heuristic
- goal is to find, with little effort, a solution with minimal total cost
- measures how good a node is: an estimate of the distance from the node to the goal
- if f(m) < f(n), then m is more likely than n to be on an optimal path to the goal
Heuristic search methods (informed search) use heuristics to guide the search.
The 8-puzzle problem
- typically a tree depth of 20, with an average branching factor of 3
- exhaustive search: 3^20 ≈ 3.5 billion states
- there are only 9! = 362880 possible (unique) states
- use heuristics to avoid repeated states

Admissibility of heuristics
- a heuristic should not overestimate the cost of changing a given state to the goal state
- a search using an admissible heuristic is always guaranteed to find the minimal cost path to the goal
Heuristics for the 8-puzzle
- h1(n): count how many tiles are in the wrong place
- h2(n): consider how far each tile has to move to its correct position: sum the Manhattan distances of each tile from its correct position
- Manhattan distance = the number of horizontal and vertical moves needed to take a tile from one position to another
- h2 is more informed than h1: search using h2 will never expand more nodes than search using h1
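The two 8-puzzle heuristics above can be sketched as follows. This is a minimal illustration, assuming states are 9-tuples read row by row with 0 for the blank, and assuming the goal layout shown in `GOAL`:

```python
# Sketch of the two 8-puzzle heuristics; a state is a tuple of 9 tiles,
# read row by row, with 0 for the blank. The goal layout is an assumption.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state, goal=GOAL):
    """Number of tiles (blank excluded) not on their goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal square."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = i // 3, i % 3
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

state = (1, 2, 3, 4, 0, 6, 7, 5, 8)   # tiles 5 and 8 are displaced
print(h1(state), h2(state))           # → 2 2
```

Since every misplaced tile needs at least one move, both counts are underestimates of the true number of moves, which is what makes them admissible.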

h3(n): account for the extra difficulty involved when two tiles have to move past each other, because tiles cannot jump over each other
- let k(n) denote the number of direct swaps that need to be made between adjacent tiles to move them into the correct sequence
- h3(n) = h2(n) + 2k(n)
- h3 is more informed than h2(n)
Relaxed problems
- a relaxed problem has fewer constraints
- 8-puzzle: a tile can move to an adjacent square regardless of whether that square is empty or not

Many non-dominated heuristics: what then?
- h(n) = max{h1(n), h2(n), ..., hK(n)}
- if all hk are admissible, will h(n) be admissible? (yes: the maximum of underestimates is still an underestimate)
- h dominates all hk
Monotonicity
- a search method is monotone if it always reaches a given node by the shortest possible path
- if it reaches a given node at different depths, it is not monotone
- a monotone search method must be admissible, provided there is only one goal state

Example: Modified traveling salesman problem — find the best route between two cities: A to F

Figure 1: Modified Traveling Salesman Problem

Search space

Figure 2: The Search Space

Solution using DFS? And BFS?



Hill Climbing
- an informed search method
- How does it work?
- Disadvantages of hill climbing?

Figure 3: Hill Climbing Illustrated

Steepest-ascent hill climbing: always follow the path to the highest next point
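Steepest-ascent hill climbing can be sketched in a few lines. The `neighbours` and `score` functions below are placeholder assumptions for illustration; the loop also shows the main disadvantage, stopping at a local optimum:

```python
# Minimal steepest-ascent hill climbing sketch: from the current state,
# always move to the best-scoring neighbour; stop at a local optimum.
def hill_climb(start, neighbours, score):
    current = start
    while True:
        best = max(neighbours(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current          # local maximum (or plateau) reached
        current = best

# Toy example: maximise -(x - 3)^2 over integer steps of 1.
result = hill_climb(0,
                    neighbours=lambda x: [x - 1, x + 1],
                    score=lambda x: -(x - 3) ** 2)
print(result)  # climbs 0 -> 1 -> 2 -> 3 and stops there
```

On a multi-modal score function the same loop would halt on whichever peak is nearest the start, which is exactly the weakness the slide asks about.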

Best-First Search
- heuristic: expand the best path from the current partially developed tree
- greedy search
- Example: 8-puzzle with h1(n) = number of tiles out of place
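A minimal best-first (greedy) search keeps the frontier in a priority queue ordered by the heuristic value alone. The small graph and heuristic table below are toy assumptions for illustration:

```python
import heapq

# Greedy best-first search sketch: always expand the frontier node with
# the smallest heuristic value h(n), ignoring path cost so far.
def greedy_best_first(start, goal, neighbours, h):
    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbours(node):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h_est = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(greedy_best_first('A', 'D', graph.__getitem__, h_est.__getitem__))
# → ['A', 'C', 'D']
```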

Figure 4: 8-tiles, with h1(n): number of tiles out of place

h2(n) = #tiles out of place + node depth

Figure 5: 8-tiles, with h2(n): number of tiles out of place + depth

Beam Search
- best-first search within a given beam width
- beam width: limits the number of nodes expanded
- Applied to the 8-puzzle?
A* Search
- f(n) = g(n) + h(n)
- g(n) = actual cost of the path from the start node to n
- h(n) = an underestimate of the distance from n to the goal
- f(n) = path-based evaluation function
- expand the node with the smallest f
Optimality
- if h(n) is always an underestimate, then A* is guaranteed to find the shortest path to the goal
- A* is optimal if h is admissible, and complete
- if a non-admissible heuristic is used, the algorithm is called A
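The A* loop above can be sketched directly: order the frontier by f(n) = g(n) + h(n) and expand the smallest. The weighted graph and heuristic table are toy assumptions; the heuristic values are underestimates of the true remaining cost, so the result is optimal:

```python
import heapq

# A* sketch: f(n) = g(n) + h(n); always expand the node with smallest f.
def a_star(start, goal, neighbours, h):
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h_est = {'A': 3, 'B': 2, 'C': 1, 'D': 0}    # admissible underestimates
path, cost = a_star('A', 'D', graph.__getitem__, h_est.__getitem__)
print(path, cost)  # → ['A', 'B', 'C', 'D'] 3
```

Note that uniform cost search is this same loop with h(n) = 0 everywhere, and greedy search is the same loop with g(n) = 0.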

Completeness
- A* is complete if the graph it searches is locally finite and if every arc has a non-zero cost
- locally finite: the graph has a finite branching factor
Uniform Cost Search
- also called branch-and-bound search
- basically A* search with h(n) = 0
- complete and optimal, provided that the cost of a path increases monotonically
- Dijkstra's algorithm


Greedy Search
- A* search with g(n) = 0
- always selects the path with the lowest estimate of the distance to the goal
- worst case: may never find a solution
- not optimal, and may follow extremely costly paths
- example of a non-optimal path: the first step on the shortest path toward the goal is longer than the first steps of other paths


Iterative-Deepening A*
- A*'s memory requirements grow exponentially with the depth of the goal in the search tree, even though heuristics reduce the branching factor
- IDA* combines iterative deepening with A*: optimal and complete, with the low memory requirements of depth-first search
- executes a series of depth-first searches, but uses a cut-off cost value as the depth limit, not the actual depth of the tree:
- in the first search, the cost cut-off value is f(n0) = g(n0) + h(n0) = h(n0)
- the cost of an optimal path may be exactly h(n0); it cannot be less, since h(n0) is an underestimate of the true cost (remember we work with underestimates of the true cost to the goal)
- expand nodes in a depth-first fashion, backtracking whenever f(n) exceeds the current cut-off value

- if a goal node is found, then stop; otherwise the cost of the optimal path must be greater than the cut-off value, so the cut-off value has to be increased
- set the cut-off value to the lowest f value of the unexpanded nodes
- repeat the depth-first searches
- IDA* is complete and optimal
- disadvantage: repeated node expansions, therefore in general more costly
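The IDA* scheme above, a series of depth-first searches with a growing f cut-off, can be sketched as follows. The weighted graph and heuristic table are toy assumptions:

```python
# IDA* sketch: repeated depth-first searches with an f-cost cut-off that
# starts at h(start) and grows to the smallest f value that exceeded it.
def ida_star(start, goal, neighbours, h):
    def dfs(node, g, path, bound):
        f = g + h(node)
        if f > bound:
            return f, None            # report the f value that broke the cut-off
        if node == goal:
            return f, path
        smallest = float('inf')
        for nxt, cost in neighbours(node):
            if nxt not in path:       # avoid cycles on the current path
                t, found = dfs(nxt, g + cost, path + [nxt], bound)
                if found is not None:
                    return t, found
                smallest = min(smallest, t)
        return smallest, None

    bound = h(start)                  # first cut-off: f(n0) = h(n0)
    while True:
        t, found = dfs(start, 0, [start], bound)
        if found is not None:
            return found
        if t == float('inf'):
            return None               # no solution exists
        bound = t                     # raise cut-off to the lowest excess f

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h_est = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(ida_star('A', 'D', graph.__getitem__, h_est.__getitem__))
# → ['A', 'B', 'C', 'D']
```

Only the current path is stored, which is where the depth-first memory saving comes from; the cost is that earlier iterations' expansions are repeated.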


Real-Time A*
- let n be the current node, and m a child node of n
- g(m) is the cost (distance) from n to m, not the cost from the root node
- backtrack to n, i.e. do not expand further from m, if the cost of backtracking plus the estimated cost to solve the problem from n is less than the estimated cost to solve the problem from m, i.e. if g(m) + h(n) < h(m)


Recursive Best-First Search (RBFS)
- uses slightly more memory than IDA*, but generates fewer nodes
Figure 6: RBFS Illustrated (nodes k, kn, n, and children n1, n2, n3)

The node expansion process (refer to Figure 6)
- node n is expanded
- the f values are computed for the children n1, n2, n3
- then the f values of n and all of its ancestors in the search tree are recomputed; this is called backing-up of f values


Computing backed-up values:
- the backed-up value of a successor ni of the just-expanded node is simply f(ni)
- the backed-up value of any ancestor node n is f(n) = min over all ni of f(ni)

If a successor of n, say n1, has the smallest f value over all unexpanded nodes, then n1 is further expanded.


- Let n′ be an unexpanded node in the search tree, where n′ is not a successor of n
- If f(n′) is the smallest f value over all unexpanded nodes, then find the common ancestor of n and n′
- If k is the common ancestor, and kn is the successor of k leading to n, then delete the entire search tree below kn, and let f(kn) = min over all unexpanded nodes qi of the subtree of kn of f(qi)
- Expand node n′
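The recursive formulation above can be sketched compactly. This follows the standard textbook recursion (each call expands the best child under a limit set by the best alternative, and backs up the child's revised f value on return); the graph and heuristic are toy assumptions:

```python
import math

# RBFS sketch: expand the best child while its f value stays below the
# f value of the best alternative path; on return, replace the child's
# stored f value with the backed-up value from its own subtree.
def rbfs(node, g, f_limit, goal, neighbours, h, f_node):
    if node == goal:
        return [node], g
    succ = [(max(g + c + h(m), f_node), m, g + c)
            for m, c in neighbours(node)]
    if not succ:
        return None, math.inf
    while True:
        succ.sort(key=lambda t: t[0])
        best_f, best, best_g = succ[0]
        if best_f > f_limit:
            return None, best_f       # back up the best f value to the parent
        alternative = succ[1][0] if len(succ) > 1 else math.inf
        result, new_f = rbfs(best, best_g, min(f_limit, alternative),
                             goal, neighbours, h, best_f)
        succ[0] = (new_f, best, best_g)
        if result is not None:
            return [node] + result, new_f

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h_est = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
path, cost = rbfs('A', 0, math.inf, 'D',
                  graph.__getitem__, h_est.__getitem__, h_est['A'])
print(path)  # → ['A', 'B', 'C', 'D']
```

Only the current path and the sibling f values along it are kept, hence the "slightly more memory than IDA*" trade-off on the slide.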


Island-Driven Search
- assumes a list of subgoals (islands) exists, i.e. n0, n1, ..., ng
- first find a path from n0 to n1, then n1 to n2, then ..., then n(g-1) to ng
- Is island-driven search admissible?
Hierarchical Search
- similar to island-driven search, but no explicit set of islands is available
- assumes the availability of macro-operators that can make large steps in an implicit search space of islands
- perform a meta-level search to produce a path of macro-operators from the base-level start node to the base-level goal node
- perform base-level searches on the islands
- Admissible?
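The island-driven scheme can be sketched by solving each leg with a plain BFS and concatenating the leg paths. The small graph and island list are toy assumptions; the example also answers the admissibility question, since forcing the path through an island can miss a shorter direct route:

```python
from collections import deque

def bfs(start, goal, neighbours):
    """Shortest path (fewest edges) between two nodes, or None."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def island_search(islands, neighbours):
    """Solve leg n0->n1, then n1->n2, ..., and join the leg paths."""
    full = [islands[0]]
    for a, b in zip(islands, islands[1:]):
        leg = bfs(a, b, neighbours)
        if leg is None:
            return None
        full += leg[1:]                # drop the duplicated island node
    return full

graph = {'A': ['B'], 'B': ['C', 'E'], 'C': ['D'], 'D': ['F'],
         'E': ['F'], 'F': []}
print(island_search(['A', 'C', 'F'], graph.__getitem__))
# → ['A', 'B', 'C', 'D', 'F']
print(bfs('A', 'F', graph.__getitem__))
# → ['A', 'B', 'E', 'F']  (the direct search is shorter)
```

Because the island path through C has 4 edges while the direct path has only 3, island-driven search is not admissible in general: it is only as good as the chosen islands.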

