informed search algorithms (slides4) Flashcards
Local search algorithms
in many optimization problems, the path is irrelevant; the goal state itself is the solution
- state space = set of "complete" configurations
- find configurations satisfying constraints
- local search: keep a single "current" state and try to improve it
Hill climbing search
kind of like greedy local search
picks a neighbor with the highest value
usually chooses at random among neighbors that tie for the maximum value
Problem: depending on initial state, can get stuck on local maxima
hill climbing search implementation
function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node
  current <- MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor <- a highest-valued successor of current
    if VALUE[neighbor] <= VALUE[current] then return STATE[current]
    current <- neighbor
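The pseudocode above can be sketched in Python; `neighbors(state)` and `value(state)` are hypothetical caller-supplied helpers, not part of the slides:

```python
import random

def hill_climbing(initial_state, neighbors, value):
    """Greedy local search: move to a best neighbor until none improves.

    `neighbors(state)` and `value(state)` are assumed helper callables.
    """
    current = initial_state
    while True:
        succ = neighbors(current)
        if not succ:
            return current
        best_value = max(value(s) for s in succ)
        if best_value <= value(current):
            return current  # local maximum reached
        # choose at random among neighbors with the maximum value
        best = [s for s in succ if value(s) == best_value]
        current = random.choice(best)

# toy 1-D example: maximize -(x - 3)^2 over the integers
state = hill_climbing(
    0,
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 3) ** 2,
)
```

On this unimodal toy problem the climb always reaches the global maximum x = 3; on problems with many local maxima it can get stuck, as the slide notes.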
Hill Climbing variants
Stochastic hill climbing
first-choice hill climbing
random-restart hill climbing
stochastic hill climbing
choose at random from the set of all improving neighbors
first choice hill climbing
jumps to the first improving neighbor found
random restart hill climbing
a series of hill climbing runs from randomly generated initial states, continuing until a goal is found
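A minimal sketch of the random-restart variant; `random_state`, `is_goal`, and `max_restarts` are illustrative assumptions, not from the slides:

```python
import random

def hill_climb(state, neighbors, value):
    # plain hill climbing: move to a best neighbor until none improves
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state
        state = best

def random_restart(random_state, neighbors, value, is_goal, max_restarts=100):
    """Run hill climbing from fresh random starts until a goal is found."""
    best = None
    for _ in range(max_restarts):
        result = hill_climb(random_state(), neighbors, value)
        if is_goal(result):
            return result
        if best is None or value(result) > value(best):
            best = result
    return best  # best local maximum seen if no goal was found

# toy: maximize -(x - 5)^2; the goal is the global optimum x == 5
rng = random.Random(0)
found = random_restart(
    random_state=lambda: rng.randint(-10, 10),
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 5) ** 2,
    is_goal=lambda x: x == 5,
)
```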
simulated annealing search
escape local maxima by allowing some "bad" moves but gradually decreasing their frequency
simulated annealing search implementation
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  local variables: current, a node
                   next, a node
                   T, a "temperature" controlling the probability of downward steps
  current <- MAKE-NODE(INITIAL-STATE[problem])
  for t <- 1 to infinity do
    T <- schedule[t]
    if T = 0 then return current
    next <- a randomly selected successor of current
    dE <- VALUE[next] - VALUE[current]
    if dE > 0 then current <- next
    else current <- next only with probability e^(dE/T)
The higher the temperature, the higher the probability of accepting a non-improving move
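A Python sketch of simulated annealing as described above; `neighbors`, `value`, and the cooling `schedule` are caller-supplied assumptions for this sketch:

```python
import math
import random

def simulated_annealing(initial_state, neighbors, value, schedule, rng=random):
    """Accept improving moves always; accept worsening moves with
    probability e^(dE/T), where T is given by the cooling schedule."""
    current = initial_state
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = rng.choice(neighbors(current))
        delta_e = value(nxt) - value(current)
        if delta_e > 0:
            current = nxt              # always take improving moves
        elif rng.random() < math.exp(delta_e / T):
            current = nxt              # "bad" move, taken with prob e^(dE/T)
        t += 1

# toy run: maximize -(x - 3)^2 with a geometric cooling schedule
rng = random.Random(42)
result = simulated_annealing(
    0,
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 3) ** 2,
    schedule=lambda t: 0 if t > 200 else 10 * 0.95 ** t,
    rng=rng,
)
```

Early on, T is large, so worsening moves are accepted often; as T decays the search behaves more and more like plain hill climbing.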
simulated annealing search properties
If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
Local beam search
keep track of k states rather than just one
start with k randomly generated states
at each iteration all the successors of all k states are generated
if any one is a goal state, stop; else select the k best successors from the complete list and repeat (they could all be successors of the same node)
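The loop above can be sketched as follows; the helper callables and `max_iters` cap are assumptions for illustration:

```python
import heapq
import random

def local_beam_search(k, random_state, neighbors, value, is_goal, max_iters=100):
    """Track k states; each round, expand all of them and keep the k best
    successors from the complete list (possibly all from one state)."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        successors = [s for state in beam for s in neighbors(state)]
        for s in successors:
            if is_goal(s):
                return s
        beam = heapq.nlargest(k, successors, key=value)
    return max(beam, key=value)

# toy: find x == 7 by maximizing -(x - 7)^2
rng = random.Random(1)
goal = local_beam_search(
    k=3,
    random_state=lambda: rng.randint(-10, 10),
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 7) ** 2,
    is_goal=lambda x: x == 7,
)
```

Note how this differs from k independent hill climbs: the k slots compete on one shared successor list, so the beam concentrates on the most promising region.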
Genetic algorithms
- a successor state is generated by combining two parent states
- start with k randomly generated states
- state represented as a string over a finite alphabet (often 0s and 1s, or digits)
- evaluation function (fitness function): higher values for better states
- produce the next generation of states by selection, crossover, and mutation
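A minimal sketch of the selection/crossover/mutation loop on bit strings; the fitness function, mutation operator, and rates here are illustrative choices, not fixed by the slides:

```python
import random

def genetic_algorithm(population, fitness, mutate, rng, generations=100):
    """Evolve a population of strings by fitness-proportional selection,
    single-point crossover, and occasional mutation."""
    for _ in range(generations):
        new_population = []
        for _ in range(len(population)):
            # fitness-proportional selection of two parents
            a, b = rng.choices(population,
                               weights=[fitness(p) for p in population], k=2)
            # single-point crossover of the two parent strings
            point = rng.randrange(1, len(a))
            child = a[:point] + b[point:]
            if rng.random() < 0.1:     # small mutation probability
                child = mutate(child, rng)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

# toy "one-max" problem: evolve 8-bit strings toward all 1s
rng = random.Random(0)

def flip_one_bit(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + ('1' if bits[i] == '0' else '0') + bits[i + 1:]

pop = [''.join(rng.choice('01') for _ in range(8)) for _ in range(20)]
best = genetic_algorithm(pop, fitness=lambda s: 1 + s.count('1'),
                         mutate=flip_one_bit, rng=rng)
```

The fitness is offset by 1 so every string has a positive selection weight even when it contains no 1s.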