informed search algorithms (slides4) Flashcards

1
Q

Local search algorithms

A

In many optimization problems, the path is irrelevant; the goal state itself is all that matters.

  • state space = set of “complete” configurations
  • goal: find a configuration satisfying the given constraints

Local search algorithms keep a single “current” state and try to improve it.

2
Q

Hill climbing search

A

A kind of greedy local search: it moves to a neighbor with the highest value, usually choosing at random among neighbors tied for the maximum value.
Problem: depending on the initial state, it can get stuck at a local maximum.

3
Q

hill climbing search implementation

A

function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node

  current <– MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor <– a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current <– neighbor
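A runnable Python sketch of this pseudocode (not from the slides; the `neighbors` and `value` functions, and the toy landscape below, are illustrative assumptions):

```python
def hill_climbing(initial, neighbors, value):
    """Greedy local search: repeatedly move to the best neighbor
    until no neighbor improves on the current state."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current  # local maximum reached
        current = best

# Toy example: maximize -(x - 3)^2 over the integers; neighbors are x ± 1.
result = hill_climbing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

On this single-peak landscape the search climbs from 0 straight to the global maximum at x = 3; on a multi-peak landscape it would stop at whichever peak is closest.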

4
Q

Hill Climbing variants

A

Stochastic hill climbing
first-choice hill climbing
random-restart hill climbing

5
Q

stochastic hill climbing

A

choose at random from the set of all improving neighbors
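A minimal Python sketch of this variant (not from the slides; `neighbors`, `value`, and the toy landscape are illustrative assumptions):

```python
import random

def stochastic_hill_climbing(initial, neighbors, value):
    """Move to a uniformly random improving neighbor; stop when none exists."""
    current = initial
    while True:
        better = [n for n in neighbors(current) if value(n) > value(current)]
        if not better:
            return current  # no uphill move available
        current = random.choice(better)

# Same toy landscape: maximize -(x - 3)^2 with neighbors x ± 1.
result = stochastic_hill_climbing(0, lambda x: [x - 1, x + 1],
                                  lambda x: -(x - 3) ** 2)
```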

6
Q

first choice hill climbing

A

jumps to the first improving neighbor found
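A Python sketch of this variant (assumptions: a `random_neighbor` generator and a `value` function supplied by the problem, plus a hypothetical `max_tries` cutoff to make the loop terminate):

```python
import random

def first_choice_hill_climbing(initial, random_neighbor, value, max_tries=100):
    """Generate random successors one at a time and jump to the first
    improving one; give up after max_tries non-improving samples."""
    current = initial
    while True:
        for _ in range(max_tries):
            n = random_neighbor(current)
            if value(n) > value(current):
                current = n
                break
        else:
            return current  # no improving neighbor found

# Toy landscape: maximize -(x - 3)^2, random neighbor is x ± 1.
result = first_choice_hill_climbing(0, lambda x: x + random.choice([-1, 1]),
                                    lambda x: -(x - 3) ** 2)
```

This is useful when a state has so many neighbors that generating them all (as plain hill climbing does) is too expensive.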

7
Q

random restart hill climbing

A

conducts a series of hill-climbing searches from randomly generated initial states until a goal is found
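A Python sketch of the restart loop (not from the slides; the `random_state` generator, `is_goal` test, restart budget, and the two-peak toy landscape are illustrative assumptions):

```python
import random

def random_restart_hill_climbing(random_state, neighbors, value, is_goal,
                                 max_restarts=50):
    """Run hill climbing from fresh random initial states until a goal is found."""
    def climb(state):
        while True:
            best = max(neighbors(state), key=value)
            if value(best) <= value(state):
                return state  # local maximum
            state = best
    for _ in range(max_restarts):
        result = climb(random_state())
        if is_goal(result):
            return result
    return None  # no goal found within the restart budget

# Toy landscape on [-10, 10] with peaks of value 10 at both ends.
landscape = lambda x: x if x >= 5 else -x
result = random_restart_hill_climbing(
    lambda: random.randint(-10, 10),
    lambda x: [max(x - 1, -10), min(x + 1, 10)],  # clamp to the interval
    landscape,
    lambda x: landscape(x) == 10,
)
```

The point of restarting is that each run has an independent chance of landing in the basin of a goal peak, so the failure probability shrinks exponentially with the number of restarts.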

8
Q

simulated annealing search

A

escape local maxima by allowing some “bad” moves, but gradually decreasing their frequency

9
Q

simulated annealing search implementation

A

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to “temperature”
  local variables: current, a node
                   next, a node
                   T, a “temperature” controlling the probability of downward steps

  current <– MAKE-NODE(INITIAL-STATE[problem])
  for t <– 1 to ∞ do
    T <– schedule[t]
    if T = 0 then return current
    next <– a randomly selected successor of current
    ΔE <– VALUE[next] – VALUE[current]
    if ΔE > 0 then current <– next
    else current <– next only with probability e^(ΔE/T)

Higher temperature makes a non-improving move more likely to be accepted.
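A runnable Python sketch of the annealing loop (the linear cooling schedule and the toy landscape are illustrative assumptions, not from the slides):

```python
import math
import random

def simulated_annealing(initial, random_neighbor, value, schedule):
    """Always accept improving moves; accept a worsening move with
    probability e^(dE/T), where T decreases over time per the schedule."""
    current = initial
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current  # frozen: annealing is finished
        nxt = random_neighbor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Toy run: maximize -(x - 3)^2 with a linear cooling schedule.
result = simulated_annealing(
    0,
    lambda x: x + random.choice([-1, 1]),
    lambda x: -(x - 3) ** 2,
    lambda t: max(0.0, 2.0 - 0.002 * t),  # T reaches 0 at t = 1000
)
```

Note how the acceptance test directly mirrors the pseudocode: at high T, e^(ΔE/T) is close to 1 even for bad moves; as T → 0 the search becomes pure hill climbing.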

10
Q

simulated annealing search properties

A

If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1

11
Q

Local beam search

A

keep track of k states rather than just one

start with k randomly generated states

at each iteration all the successors of all k states are generated

if any one is a goal state, stop; else select the k best successors from the complete list and repeat (the k best could all be successors of the same state)
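The steps above can be sketched in Python as follows (a toy version under assumed `random_state`, `successors`, `value`, and `is_goal` functions, with a hypothetical iteration cap):

```python
import heapq
import random

def local_beam_search(k, random_state, successors, value, is_goal,
                      max_iters=100):
    """Keep the k best states; each round, pool all successors of all
    k states and keep only the k best of the pool."""
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        for s in states:
            if is_goal(s):
                return s
        pool = [n for s in states for n in successors(s)]
        states = heapq.nlargest(k, pool, key=value)  # k best of the whole pool
    return None

# Toy run: find x = 7 on the integer line with k = 3 beams.
result = local_beam_search(
    3,
    lambda: random.randint(-20, 20),
    lambda x: [x - 1, x + 1],
    lambda x: -abs(x - 7),
    lambda x: x == 7,
)
```

Unlike k independent hill climbs, the beams share one candidate pool, so search effort migrates toward the most promising region.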

12
Q

Genetic algorithms

A
  • A successor state is generated by combining two parent states
  • start with k randomly generated states

-a state is represented as a string over a finite alphabet (often 0s and 1s, or digits)
-the evaluation function (fitness function) assigns higher values to better states
-the next generation of states is produced by selection, crossover, and mutation
