
Artificial Intelligence

Informed Search and Exploration

Searching: So Far
We've discussed how to build goal-based and utility-based agents that search to solve problems

Chapter 4 (4.3-4.6)

We've also presented both uninformed (or blind) and informed (or heuristic) approaches to search. What we've covered so far explores the search space systematically, which may enumerate the entire state space before finding a solution

Last Time: Search Strategies

Uninformed: Use only information available in the problem formulation
Breadth-first, Uniform-cost, Depth-first, Depth-limited, Iterative deepening

Informed: Use heuristics to guide the search
Best first: Greedy search, A* search

This Time
Local search algorithms: Hill climbing, Simulated annealing, Genetic algorithms

Local Search Algorithms


As an alternative to these search strategies, we can use local search algorithms: keep a single current state and try to improve it in order to solve the problem at hand
The path is irrelevant; the goal state itself is the solution. We do not necessarily have a designated start state

Optimization
Problems where we search through complete solutions to find the best one are often referred to as optimization problems. Most optimization tasks belong to a class of computational problems called NP
(non-deterministic polynomial time solvable). For NP problems, state spaces are usually exponential, so systematic search methods are not time- or space-efficient

The objective is to search through the problem space to find other solutions that are better, the best, or that meet certain criteria (a goal)

Optimization Problems
As it turns out, many real-world problems that we might want an agent to solve are similarly hard optimization problems:
Bin-packing: the goal is to pack a collection of objects into the minimum number of fixed-size "bins"
Logistics planning
VLSI layout/circuit design
Theorem proving
Navigation/routing
Production scheduling
Learning the parameters for a neural network

Optimization Problems
For optimization problems, there is a well-defined objective function that we are trying to optimize. In addition to finding goals, local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function

Local Search
Local search is a type of greedy search that focuses on a specific (or local) part of the search space, rather than trying to branch out into all of it. We only consider the neighborhood of the current state rather than the entire state space (multiple paths)

Local search
Consider the state space landscape. A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function)
If elevation corresponds to cost, then the aim is to find the lowest valley: a global minimum
If elevation corresponds to an objective function, then the aim is to find the highest peak: a global maximum
(You can convert from one to the other just by inserting a minus sign.)

Local search
Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum

Hill-Climbing (HC)
The most common local search strategy is called hill climbing when the task is to maximize the objective function
It is called gradient ascent if we are maximizing, and gradient descent if we are minimizing

We consider all the successors of the current node, expand the best one, and throw the rest away

Hill-Climbing
The hill-climbing search algorithm is simply a loop that continually moves in the direction of increasing value, that is, uphill. It terminates when it reaches a "peak" where no neighbor has a higher value. The algorithm does not maintain a search tree, so the current node data structure need only record the state and its objective function value

Hill-Climbing
Hill climbing does not look ahead beyond the immediate neighbors of the current state, and is sometimes called greedy local search. It can perform quite well: hill climbing often makes very rapid progress towards a solution, because it is usually quite easy to improve a bad state

Hill-Climbing
Example: Hill-climbing search (a minimal code sketch follows the steps)
1. Pick a solution from the search space and evaluate it. Define this as the current solution.
2. Generate a new solution by applying a transformation to the current solution and evaluate it.
3. If the new solution is better than the current one, make it the new current solution; otherwise discard it.
4. Repeat steps 2 and 3 until there are no more possible transformations.
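A minimal Python sketch of the steepest-ascent loop described earlier (consider all successors, keep the best), assuming the caller supplies a neighbors function (the transformation of step 2) and a score function to maximize; the names are illustrative, not from the slides:

    def hill_climb(initial, neighbors, score, max_steps=10_000):
        # Basic hill climbing: repeatedly move to the best-scoring
        # successor; stop when no neighbor improves the current state.
        current = initial
        for _ in range(max_steps):
            successors = neighbors(current)
            if not successors:
                break
            best = max(successors, key=score)
            if score(best) <= score(current):
                break  # a "peak": no neighbor has a higher value
            current = best
        return current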

Hill-Climbing Issues
Often gets stuck for the following reasons:
Local maxima
Ridges
Plateaux (where sideways moves may be needed)

Objective Surfaces
The objective surface is a plot of the objective function's landscape. The various levels of optimality can be seen on the objective surface

Escaping Local Optima


Local optima are OK, but sometimes we want to find the absolute best solution. There are several ways we can try to avoid local optima and find more globally optimal solutions:
Random restarting
Simulated annealing

Random Restarting
If at first you don't succeed: try, try again!
The idea here is to run the standard HC search algorithm several times, each with a different, randomized initial state. Since HC is a local search strategy, trying multiple initial states allows us to locally explore a wider range of the search space
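A sketch of random restarting built on the hill_climb sketch above, assuming a random_state function that generates a randomized initial state; again, all names are illustrative:

    def random_restart(random_state, neighbors, score, restarts=10):
        # Run basic hill climbing from several random initial states
        # and keep the best result found across all runs.
        best = None
        for _ in range(restarts):
            result = hill_climb(random_state(), neighbors, score)
            if best is None or score(result) > score(best):
                best = result
        return best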

Random Restarting
It turns out that, if each HC run has a probability p of success, the expected number of restarts needed is approximately 1/p. For example, with 8-Queens there is a probability of success p ≈ 0.14, and 1/0.14 ≈ 7. So, on average, we would need only about 7 randomly initialized trials of the basic HC search to find a solution

If we pick lucky initial states, we can find the global optimum!

Simulated Annealing (SA)


In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to coalesce into a low-energy crystalline state. For this analogy, let us switch from hill climbing to gradient descent (minimizing the cost)

Simulated Annealing
Simulated annealing builds on an analogy with thermodynamics: the Boltzmann probability distribution describes the relative probabilities of finding a system in different states as a function of temperature

Simulated Annealing
According to thermodynamics, to grow a crystal:
Start by heating the raw materials into a molten state
The crystal melt is then cooled
If the temperature is reduced too quickly, irregularities occur and the crystal does not reach its ground state

Simulated Annealing
Imagine the task of getting a ping-pong ball into the deepest crevice of a bumpy surface. If we simply let the ball roll, it will come to rest in a local minimum. The simulated annealing solution is to start shaking hard (i.e. at a high temperature) and then gradually reduce the intensity of the shaking (i.e. lower the temperature)

By analogy, SA relies on a good cooling schedule, which maps the current time to a temperature T, to find the optimal solution:
Usually exponential
Can be very difficult to devise

Simulated Annealing
In some ways, the simulated-annealing algorithm is similar to hill climbing, except that instead of picking the best move, it picks a random move
If the move improves the situation, it is always accepted
Otherwise, it is accepted with some probability less than 1

Simulated Annealing 1)
Requirements for simulated annealing:
A description of possible system states (representation)
A generator of random changes in the configuration (search operator)
An evaluation function E (analog of energy) for minimization
A parameter T (analog of temperature) and an annealing schedule

The acceptance probability decreases exponentially with the badness of the move, i.e. the amount ΔE by which the evaluation is worsened. The probability also decreases as the temperature T goes down: P(accept) = e^(-ΔE/T)
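A minimal Python sketch of this acceptance rule, assuming an energy function to minimize, a random_change search operator, and a simple exponential cooling schedule; the parameter values are illustrative defaults, not prescribed by the slides:

    import math
    import random

    def simulated_annealing(initial, random_change, energy,
                            t0=1.0, cooling=0.995, t_min=1e-4):
        # Accept every improving move; accept a worsening move with
        # probability exp(-delta_e / t), which shrinks as t cools.
        current, t = initial, t0
        while t > t_min:
            candidate = random_change(current)
            delta_e = energy(candidate) - energy(current)
            if delta_e < 0 or random.random() < math.exp(-delta_e / t):
                current = candidate
            t *= cooling  # exponential cooling schedule
        return current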

Genetic Algorithms
So far, the optimization strategies we've discussed search for a single solution, one state at a time, within a neighborhood. Genetic algorithms (GAs) are a unique search approach that maintains a population of states, or individuals, which evolves
Also called evolutionary search

Evolutionary Analogy
Consider a population of rabbits:
Some individuals are faster and smarter than others
Slower, dumber rabbits are likely to be caught and eaten by foxes
Fast, smart rabbits survive to do what rabbits do best: make more rabbits!!

Evolutionary Analogy
The rabbits that survive breed with each other to generate offspring, which starts to mix up their genetic material:
Fast rabbits might breed with fast rabbits
Fast rabbits with slow rabbits
Smart with not-so-smart, etc.

Evolutionary Analogy
In this analogy, an individual rabbit represents a solution to the problem (i.e. a single point in the state space). The state description is its DNA, if you will
The foxes represent the problem constraints: solutions that do well are likely to survive
Furthermore, nature occasionally throws in a wild hare, because genes can mutate
What we need to create are notions of natural selection, reproduction, and mutation

Core Elements of GAs


For selection, we use a fitness function to rank the individuals of the population
For reproduction, we define a crossover operator, which takes the state descriptions of two individuals and combines them to create new ones
For mutation, we can merely choose individuals in the population and alter part of their state

Genetic Algorithm Example


POP = initialPopulation                        // build a new population
repeat {                                       // with every generation
    NEW_POP = empty
    for I = 1 to POP_SIZE {
        X = fit individual                     // natural selection
        Y = fit individual
        CHILD = crossover(X, Y)                // reproduction
        if small random probability then
            mutate(CHILD)                      // mutation
        add CHILD to NEW_POP
    }
    POP = NEW_POP
} until solution found or enough time elapsed
return most fit individual in POP
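One way to realize this loop as runnable Python, under stated assumptions: fitness values are non-negative (so they can serve directly as selection weights for random.choices), and crossover and mutate operators are supplied by the caller; all names are illustrative:

    import random

    def genetic_algorithm(population, fitness, crossover, mutate,
                          generations=100, mutation_rate=0.05):
        # Generational GA: build each new population from children of
        # fitness-weighted parents, mutating a small fraction of them.
        for _ in range(generations):
            weights = [fitness(ind) for ind in population]
            new_pop = []
            for _ in range(len(population)):
                x, y = random.choices(population, weights=weights, k=2)
                child = crossover(x, y)
                if random.random() < mutation_rate:
                    child = mutate(child)  # small random probability
                new_pop.append(child)
            population = new_pop
        return max(population, key=fitness)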

Genetic Algorithm Example


The previous algorithm completely replaces the population with each new generation, but we can allow individuals from older generations to live on. Reproduction here is only between two parents (as in nature), but we can allow for more! The population size is also fixed, but we could let it vary from one generation to the next

Selection
Selection (either to reproduce or to live on) from one generation to the next relies on the fitness function. We can usually think of the fitness function as a heuristic, or as the objective function. We want to apply pressure so that good solutions survive and bad solutions die:
Too much, and we converge to sub-optimal solutions
Too little, and we don't make much progress

Selection
Deterministic selection
1. Rank all the individuals using the fitness function and choose the best k to survive
2. Replace the rest with offspring
Can lead to fast convergence (and local optima)

Stochastic selection
Randomly choose the k individuals to survive
Slower to converge, and could lose good solutions

Reproduction
The unique thing about GAs is the ability of solutions to inherit properties from other solutions in the population. The basic way to perform a crossover operation is to splice together parts of the state description from each parent

Reproduction
There are many different ways to choose crossover point(s) for reproduction (two are sketched in code below):
Single-point: choose the center, or some optimal point, in the state description; take the first half of one parent and the second half of the other
Random: choose the split point randomly (or proportional to the parents' fitness scores)
Uniform: choose each element of the state description independently, at random (or proportional to fitness)
Etc...
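Sketches of the single-point and uniform operators for list-based state descriptions; both assume the two parents have equal length, and the names are illustrative:

    import random

    def single_point_crossover(x, y):
        # Split both parents at one random point and splice the halves.
        point = random.randrange(1, len(x))
        return x[:point] + y[point:]

    def uniform_crossover(x, y):
        # Choose each element independently from one parent or the other.
        return [random.choice(pair) for pair in zip(x, y)]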

Mutation
There are also a variety of ways to mutate individuals in the population. The first question to consider is who to mutate:
Alter the most fit? Least fit? Random individuals?
Mutate children only, or surviving parents as well?
How many to mutate?

The second question is how to mutate (one arbitrary mutation operator is sketched below):
Totally arbitrarily?
Mutate to a better neighbor?
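A sketch of the "totally arbitrary" option, assuming list-based individuals and an alphabet of allowed values; the names are illustrative:

    import random

    def point_mutation(individual, alphabet):
        # Replace one randomly chosen element with a random allowed value.
        i = random.randrange(len(individual))
        mutant = list(individual)
        mutant[i] = random.choice(alphabet)
        return mutant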

GAs and Creativity

The objectives of the GA search:
The population as a whole is trying to converge to an optimal solution
Because solutions can evolve from a variety of factors, without prodding from us as to which direction to go (as in local search), very novel problem solutions can be found/discovered

Genetic Algorithm Example

Constructing a jumbo jet (cannot guarantee an optimal solution):
The shape of the wings
Be able to fly
Fuel-efficient
Limited size
Weight
Stable
Intact
Etc...

References
1) Roger Eriksson, Department of Computing Science, University of Skövde, Sweden

Summary
Iterative improvement algorithms keep only a single state in memory, and can get stuck in local extrema/optima. Local search methods are more appropriate for solving complete-state search and optimization problems:
State spaces can be prohibitively large
The goal is different than with systematic search strategies

Summary
There are several effective ways of escaping local optima for local searching, which exploit different properties:
Random restarting tries several times from different parts of the search space
Simulated annealing allows for a variety of moves by searching stochastically (it is complete and optimal given a slow enough cooling schedule)
Local beam search keeps track of k states rather than just one; genetic algorithms are a variant of stochastic local beam search

Next Time!

Game Playing!
Section 6.1-6.4
