
9-04-2013

Uninformed (blind) search algorithms


Breadth-First Search (BFS)
Uniform-Cost Search
Depth-First Search (DFS)
Depth-Limited Search
Iterative Deepening
Best-First Search

HW#1 due today


HW#2 due Monday, 9/09/13, in class
Continue reading Chapter 3

Formulate, Search, Execute


1. Goal formulation

2. Problem formulation
3. Search algorithm
4. Execution

A problem is defined by four items:


1. initial state
2. actions or successor function
3. goal test (explicit or implicit)
4. path cost (c(x,a,y), the sum of step costs)

A solution is a sequence of actions leading


from the initial state to a goal state

Search algorithms have the following basic form:

do until terminating condition is met
    if no more nodes to consider then return fail;
    select node;      {choose a node (leaf) on the tree}
    if chosen node is a goal then return success;
    expand node;      {generate successors & add to tree}
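
A minimal Python sketch of this loop, assuming a problem object with initial_state, is_goal, and successors members (illustrative names, not from the slides); the select function is the only part that changes from one strategy to the next:

    def generic_search(problem, select):
        """Generic tree search; `select` removes and returns one leaf node from the fringe."""
        fringe = [(problem.initial_state, [problem.initial_state])]  # nodes are (state, path)
        while True:
            if not fringe:                              # no more nodes to consider
                return None                             # fail
            state, path = select(fringe)                # choose a node (leaf) on the tree
            if problem.is_goal(state):                  # goal test on the chosen node
                return path                             # success
            for succ in problem.successors(state):      # expand node: generate successors
                fringe.append((succ, path + [succ]))    # and add them to the tree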

Analysis
b = branching factor
d = depth of the shallowest goal node (solution depth)
m = maximum depth of the search tree (may be infinite)

g(n) = the total cost of the path on the search


tree from the root node to node n
h(n) = the straight line distance from n to G

n    h(n)
S    5.8
A    3
B    2.2
C    2
G    0

Uninformed search strategies use only the


information available in the problem
definition

Breadth-first search
Uniform-cost search

Depth-first search
Depth-limited search
Iterative deepening search

Expand shallowest unexpanded node


Implementation:
fringe is a FIFO queue, i.e., new
successors go at end
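
A breadth-first sketch along these lines, assuming a successors(state) function supplied by the caller (an illustrative helper, not defined in the slides):

    from collections import deque

    def breadth_first_search(start, is_goal, successors):
        """BFS: the fringe is a FIFO queue, so new successors go at the end."""
        fringe = deque([(start, [start])])       # (state, path to state)
        seen = {start}                           # skip states already placed on the tree
        while fringe:
            state, path = fringe.popleft()       # shallowest unexpanded node
            if is_goal(state):
                return path
            for succ in successors(state):
                if succ not in seen:
                    seen.add(succ)
                    fringe.append((succ, path + [succ]))
        return None                              # fringe exhausted: fail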

Idea: order the branches under each node so that


the most promising ones are explored first

g(n) is the total cost of the path on the search


tree from the root node to node n
sort the open list by increasing g(), that is,
consider the shortest partial path first
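
A uniform-cost sketch that keeps the open list as a priority queue ordered by g(n); here successors(state) is assumed to yield (next_state, step_cost) pairs (illustrative names):

    import heapq, itertools

    def uniform_cost_search(start, is_goal, successors):
        """UCS: always expand the open node with the smallest path cost g(n)."""
        tie = itertools.count()                        # tiebreaker so states are never compared
        fringe = [(0, next(tie), start, [start])]      # (g, tie, state, path), ordered by g
        best_g = {start: 0}
        while fringe:
            g, _, state, path = heapq.heappop(fringe)  # cheapest partial path first
            if is_goal(state):
                return path, g
            for succ, step_cost in successors(state):
                new_g = g + step_cost
                if new_g < best_g.get(succ, float("inf")):   # keep only the cheapest route to succ
                    best_g[succ] = new_g
                    heapq.heappush(fringe, (new_g, next(tie), succ, path + [succ]))
        return None, float("inf")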

Expand deepest unexpanded node


Implementation:

fringe = LIFO queue, i.e., put successors at front
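
A depth-first sketch using a plain list as the LIFO fringe, with the repeated-state check along the current path that the analysis below mentions; successors(state) is again an assumed helper:

    def depth_first_search(start, is_goal, successors):
        """DFS: the fringe is a LIFO stack, so the deepest node is expanded next."""
        fringe = [(start, [start])]              # (state, path to state)
        while fringe:
            state, path = fringe.pop()           # last in, first out: deepest node
            if is_goal(state):
                return path
            for succ in successors(state):
                if succ not in path:             # avoid repeated states along this path
                    fringe.append((succ, path + [succ]))
        return None                              # fail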


Complete?
No: fails in infinite-depth spaces, spaces with
loops
Modify to avoid repeated states along path
complete in finite spaces

Time?
O(b^m): terrible if m is much larger than d

but if solutions are dense, may be much faster than


BFS

Space?

O(bm), i.e., linear space!


Optimal?
No

= depth-first search with depth limit l,


i.e., nodes at depth l have no successors

Recursive implementation:
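
One way that recursive implementation might look (a sketch; the CUTOFF sentinel and the parameter names are illustrative, and CUTOFF distinguishes "hit the depth limit" from "no solution below this node"):

    CUTOFF = "cutoff"    # returned when the limit was reached somewhere below

    def depth_limited_search(state, is_goal, successors, limit, path=None):
        """DFS restricted to depth `limit`: nodes at depth `limit` get no successors."""
        path = path or [state]
        if is_goal(state):
            return path
        if limit == 0:
            return CUTOFF
        cutoff_occurred = False
        for succ in successors(state):
            result = depth_limited_search(succ, is_goal, successors, limit - 1, path + [succ])
            if result == CUTOFF:
                cutoff_occurred = True
            elif result is not None:
                return result
        return CUTOFF if cutoff_occurred else None    # None means definite failure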

Number of nodes generated in a depth-limited search


to depth d with branching factor b:

NDLS = b^0 + b^1 + b^2 + ... + b^(d-2) + b^(d-1) + b^d

Number of nodes generated in an iterative deepening


search to depth d with branching factor b:
NIDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + ... + 3b^(d-2) + 2b^(d-1) + 1b^d

For b = 10, d = 5:

NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

Overhead = (123,456 - 111,111)/111,111 = 11%
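
Iterative deepening then just reruns depth-limited search with limits 0, 1, 2, ...; a short sketch assuming the depth_limited_search function and CUTOFF sentinel from the previous sketch are in scope:

    import itertools

    def iterative_deepening_search(start, is_goal, successors):
        """Rerun depth-limited search with limit 0, 1, 2, ... until no cutoff occurs."""
        for limit in itertools.count():
            result = depth_limited_search(start, is_goal, successors, limit)
            if result != CUTOFF:      # a solution path, or None meaning definite failure
                return result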

Complete?
Yes
Time?

(d+1)b^0 + d·b^1 + (d-1)b^2 + ... + b^d = O(b^d)

Space?

O(bd)

Optimal?
Only if step cost = 1; otherwise NO

Problem formulation usually requires


abstracting away real-world details to define
a state space that can feasibly be explored

Variety of uninformed search strategies


Iterative deepening search uses only linear
space and not much more time than other
uninformed algorithms

Idea: use an evaluation function f(n) for each


node
estimate of "desirability"
Expand most desirable unexpanded node

Implementation:
Order the nodes in the Open List (fringe) in
decreasing order of desirability
Special cases:

greedy best-first search


A* search
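
A generic best-first sketch along these lines: the Open List is a priority queue ordered by an arbitrary evaluation function f, and the special cases (uniform cost, greedy, A*) just plug in different f. successors(state) is assumed to yield (next_state, step_cost) pairs, and all names are illustrative:

    import heapq, itertools

    def best_first_search(start, is_goal, successors, f):
        """Expand the open node with the smallest f value; the choice of f sets the strategy."""
        tie = itertools.count()                                  # tiebreaker for equal f values
        fringe = [(f(start, 0), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
        while fringe:
            _, _, g, state, path = heapq.heappop(fringe)         # most desirable node first
            if is_goal(state):
                return path, g
            for succ, step_cost in successors(state):
                new_g = g + step_cost
                heapq.heappush(fringe, (f(succ, new_g), next(tie), new_g, succ, path + [succ]))
        return None, float("inf")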

g(n) path-cost function


= cost of the path from the root to node n
found so far (greater than or equal to g*(n),
the cost of an optimal path from the root to n)
h(n) heuristic function
estimates the cost of a path from node n to the
closest goal node
f(n) evaluation function
measure of how likely node n is to be part of a
solution
one possibility: f(n) = g(n) + h(n)

Possible evaluation functions:


f(n) = probability that a node is on the right path
f(n) = distance function (a measure of the difference
  between node n & the nearest goal node)
f(n) = g(n)            Uniform Cost
f(n) = h(n)            Greedy
f(n) = g(n) + h(n)     A* (estimates the total cost of a
  solution path which goes through node n)

Evaluation function f(n) = h(n) (heuristic)


= estimate of cost from n to goal
e.g., hSLD(n) = straight-line distance from n to
Bucharest
Greedy best-first search expands the node
that appears to be closest to goal
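
Assuming the best_first_search sketch above is in scope, greedy search is just the f(n) = h(n) case. The edge costs below are invented for illustration, while the h values reuse the small table earlier in these notes:

    # Toy data: successors(state) yields (next_state, step_cost) pairs.
    graph = {"S": [("A", 3), ("B", 4)], "A": [("G", 5)], "B": [("G", 3)], "G": []}
    h = {"S": 5.8, "A": 3, "B": 2.2, "G": 0}

    path, cost = best_first_search(
        start="S",
        is_goal=lambda s: s == "G",
        successors=lambda s: graph[s],
        f=lambda state, g: h[state],     # greedy: ignore g(n), trust h(n) alone
    )
    print(path, cost)                    # -> ['S', 'B', 'G'] 7 for this toy data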

Complete?
No: can get stuck in loops,
e.g., Iasi → Neamt → Iasi → Neamt → ...
Time?
O(b^m), but a good heuristic can give
dramatic improvement
Space?
O(b^m) -- keeps all nodes in memory
Optimal?
No

Idea: avoid expanding paths that are already


expensive
prune longer paths (if there is >1 path from
the root to node n, only keep the shortest on
the search tree)
Evaluation function f(n) = g(n) + h(n)
g(n) = lowest cost so far to reach n
h(n) = estimated cost from n to goal
f(n) = estimated total cost of path through n
to goal
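
Reusing the best_first_search sketch and the toy graph and h table from the greedy example above, A* only changes the evaluation function:

    path, cost = best_first_search(
        start="S",
        is_goal=lambda s: s == "G",
        successors=lambda s: graph[s],
        f=lambda state, g: g + h[state],   # A*: cost so far plus estimated cost to the goal
    )
    print(path, cost)                      # -> ['S', 'B', 'G'] 7, the cheapest path in the toy graph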

f(n) estimates the total cost of a solution path


which goes through node n

f(n) = g(n) + h(n)
  g(n): lowest-cost path from S to n (found so far)
  h(n): heuristic estimate of cost from n to G

[Figure: in the search-tree diagrams, a node N is drawn with its path-cost value g(N) as a subscript and its heuristic value h(N) as a superscript.]

A heuristic h(n) is admissible if for every


node n, h(n) ≤ h*(n), where h*(n) is the true
cost to reach the goal state from n.
An admissible heuristic never overestimates
the cost to reach the goal, i.e., it is
optimistic
Example: hSLD(n) (never overestimates the
actual road distance)
Theorem: If h(n) is admissible, A* using
TREE-SEARCH is optimal

A heuristic is consistent if for every node n,


every successor n' of n generated by any
action a, h(n) ≤ c(n,a,n') + h(n')

If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n,a,n') + h(n')
      ≥ g(n) + h(n) = f(n)
i.e., f(n) is non-decreasing along any path.
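
A small self-contained check of the condition over an explicit graph (the same toy data used in the earlier examples); it simply tests h(n) ≤ c(n,a,n') + h(n') on every arc:

    def is_consistent(graph, h):
        """Check h(n) <= c(n, a, n') + h(n') for every arc n -> n' in the graph."""
        return all(
            h[n] <= cost + h[succ]
            for n, edges in graph.items()
            for succ, cost in edges
        )

    # Toy graph and heuristic (illustrative data, not from the slides).
    graph = {"S": [("A", 3), ("B", 4)], "A": [("G", 5)], "B": [("G", 3)], "G": []}
    h = {"S": 5.8, "A": 3, "B": 2.2, "G": 0}
    print(is_consistent(graph, h))   # True, so f(n) is non-decreasing along any path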

Theorem: If h(n) is consistent, A* using


GRAPH-SEARCH is optimal

The following figure shows a portion of a partially expanded


search tree. Each arc between nodes is labeled with the cost of
the corresponding operator, and the leaves are labeled with the
value of the heuristic function, h.

Which node (use the node's letter) will be expanded next by each
of the following search algorithms?

(a) Depth-first search
(b) Breadth-first search
(c) Uniform-cost search
(d) Greedy search
(e) A* search

[Figure: partially expanded search tree with nodes A, B, C, D, G, H; arcs labeled with operator costs and leaves labeled with heuristic values h.]

Search
  DFS
  BFS
  Uniform Cost          g(n)
  Depth Limited
  Iterative Deepening
  BM*
  BestFS                f(n)
    Greedy              f(n) = h(n)
    A*                  f(n) = g(n) + h(n)

cf: Animated Search Algorithms at


http://www.cs.rmit.edu.au/AI-Search/Product/
* British Museum Algorithm (i.e. Exhaustive Search)
