Local Search Flashcards
What does local search return?
A state, not a path as returned by standard search algorithms.
What are stopping criteria for local search?
A bound on the heuristic value, a goal test, or a limit on the number of runs (or steps).
What is iterative best improvement strategy?
A local search strategy that always selects a successor that optimises a heuristic function, typically a loss (to minimise) or a gain (to maximise). It makes the greedy choice at each step.
What is hill climbing?
Maximising a goal value.
What is greedy descent?
Minimising a goal value.
How does hill climbing work?
Start at an arbitrary state and iteratively move to the best neighbouring state. Stop when no better neighbours exist.
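A minimal sketch of this loop in Python, assuming hypothetical helpers: a `neighbours` function that lists successor states and a heuristic `h` to minimise.

```python
def hill_climb(start, neighbours, h):
    """Greedy descent: repeatedly move to the best neighbour; stop at a local optimum."""
    current = start
    while True:
        best = min(neighbours(current), key=h, default=current)
        if h(best) >= h(current):  # no strictly better neighbour: local optimum
            return current
        current = best

# Toy example: minimise (x - 7)^2 over the integers.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2))  # -> 7
```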
What are the drawbacks of hill-climbing?
Depending on inital state, can get stuck in a local maxima
What is hill climbing with sideways moves?
Uses the same procedure as hill climbing, except that when there are no uphill successors the algorithm moves sideways to an equally good state. A parameter m caps the number of consecutive sideways moves.
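One way this variant could look, extending the minimising sketch above (the `neighbours` and `h` helpers are again assumptions):

```python
def hill_climb_sideways(start, neighbours, h, m=100):
    """Hill climbing that permits at most m consecutive sideways (equal-h) moves."""
    current, sideways = start, 0
    while True:
        best = min(neighbours(current), key=h, default=current)
        if h(best) > h(current):        # every neighbour is worse: stop
            return current
        if h(best) == h(current):       # sideways move onto a plateau
            sideways += 1
            if sideways > m:
                return current
        else:                           # strictly better move: reset the counter
            sideways = 0
        current = best
```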
How do sideways moves perform on the n-queens problem?
With no sideways moves, the algorithm succeeds only 14% of the time. With sideways moves, it succeeds 94% of the time, but takes more moves to find a solution.
What is enforced hill climbing?
Performs breadth-first search from a local optimum to find the next state with a better h value. Good at escaping local optima.
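A sketch of that breadth-first escape step, assuming hashable states and the same hypothetical `neighbours` and `h` helpers:

```python
from collections import deque

def enforced_hill_climb_step(local_opt, neighbours, h):
    """BFS outward from a local optimum until a state with strictly better h is found."""
    target = h(local_opt)
    frontier, seen = deque([local_opt]), {local_opt}
    while frontier:
        state = frontier.popleft()
        for nxt in neighbours(state):
            if h(nxt) < target:         # first strictly better state found
                return nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                         # the reachable space is exhausted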
What is tabu search?
It maintains a fixed-length queue of the most recently visited states (the tabu list) and never steps to a state on that list.
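A minimal sketch using a `deque` with `maxlen` as the fixed-length tabu list (the `neighbours` and `h` helpers and the budgets are assumptions):

```python
from collections import deque

def tabu_search(start, neighbours, h, tabu_len=10, max_steps=1000):
    """Best-neighbour search that never revisits a state on the tabu list."""
    current = best = start
    tabu = deque([start], maxlen=tabu_len)  # fixed-length queue of recent states
    for _ in range(max_steps):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=h)    # may be worse: helps escape optima
        tabu.append(current)
        if h(current) < h(best):
            best = current
    return best
```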
What is stochastic search?
Local search strategies in which randomisation plays a prominent role.
What is random-restart hill climbing?
A series of hill-climbing searches from random initial states: if a run gets stuck or takes too long, restart from a new random candidate.
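The restart loop itself is tiny; here is a sketch where `run_once`, `random_start`, and `is_goal` are hypothetical callables for one greedy run, a random initial state, and the goal test:

```python
def random_restart(run_once, random_start, is_goal, max_restarts=100):
    """Run independent hill-climbing tries from random starts until one succeeds."""
    for _ in range(max_restarts):
        result = run_once(random_start())  # one complete greedy run
        if is_goal(result):
            return result
    return None                            # no try succeeded within the budget
```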
How to calculate the expected number of runs of random-restart hill climbing?
Suppose a single try of greedy descent succeeds with probability p. The number of tries until the first success is then geometrically distributed, so the expected number of tries is 1/p.
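A standard derivation: since the tries are independent, the number of tries until the first success follows a geometric distribution, hence

```latex
\mathbb{E}[\text{tries}] = \sum_{k=1}^{\infty} k\,p\,(1-p)^{k-1} = \frac{1}{p}
```

For instance, with p ≈ 0.14 (the no-sideways n-queens success rate above), roughly 1/0.14 ≈ 7 tries are expected.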
What is first-choice hill climbing?
An implementation of stochastic hill climbing: it generates successors at random until it finds one that is uphill, so it doesn't waste time generating all successors.
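A sketch of that idea, where `random_successor` is a hypothetical helper that draws one successor at a time and `h` is minimised:

```python
def first_choice(current, random_successor, h, max_tries=1000):
    """Sample random successors; take the first one that improves h."""
    for _ in range(max_tries):
        candidate = random_successor(current)  # one random successor at a time
        if h(candidate) < h(current):
            return candidate                   # first improving move wins
    return current                             # treated as a local optimum
```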
What is stochastic hill climbing?
Selects randomly amongst the uphill moves; the selection probability can vary with the steepness of the uphill move. It can still get stuck in local maxima.
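One possible steepness-weighted selection, again minimising an assumed `h` over assumed `neighbours`:

```python
import random

def stochastic_uphill_move(current, neighbours, h):
    """Choose randomly among improving moves, weighted by how much each improves h."""
    improving = [s for s in neighbours(current) if h(s) < h(current)]
    if not improving:
        return current                                 # local optimum
    weights = [h(current) - h(s) for s in improving]   # steeper moves get more weight
    return random.choices(improving, weights=weights, k=1)[0]
```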
What is random walk hill climbing?
Adds randomisation so that the search can avoid getting stuck in local optima:
1. Pick a parameter p (the walk probability).
2. At every step, with probability p make an uninformed random-walk step; with probability 1-p make a greedy choice (sketched below).
This search strategy is PAC.
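The sketch referenced above, with the usual assumed `neighbours` and `h` helpers and an arbitrary step budget:

```python
import random

def random_walk_hill_climb(start, neighbours, h, p=0.1, max_steps=10_000):
    """With probability p take a uniformly random step, otherwise a greedy one."""
    current = best = start
    for _ in range(max_steps):
        nbrs = neighbours(current)
        if not nbrs:
            break
        if random.random() < p:
            current = random.choice(nbrs)   # uninformed random-walk step
        else:
            current = min(nbrs, key=h)      # greedy step
        if h(current) < h(best):
            best = current
    return best
```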
What is PAC (probabilistic approximately complete)?
A search strategy is probabilistically approximately complete if the probability that a try fails to find an optimal solution can be made arbitrarily small by running the search for sufficiently long.
What is probabilistic hill-climbing?
It allows worsening search steps, with a probability that depends on the amount of deterioration in the evaluation function value.
How does annealing work?
As the algorithm proceeds, the probability of accepting a worse successor decreases. A temperature parameter T determines this probability: at high temperature, locally bad moves are likely to be accepted; at low temperature, they rarely are. Typically the algorithm starts at a high temperature and decreases it exponentially. A common acceptance rule takes a move that worsens the evaluation by ΔE with probability e^(-ΔE/T).
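A compact sketch of simulated annealing with that acceptance rule and a geometric cooling schedule; `neighbours`, `h`, and the schedule constants are assumptions:

```python
import math
import random

def simulated_annealing(start, neighbours, h, t0=1.0, cooling=0.995, t_min=1e-3):
    """Accept worse moves with probability exp(-delta/T); cool T geometrically."""
    current, t = start, t0
    while t > t_min:
        candidate = random.choice(neighbours(current))
        delta = h(candidate) - h(current)   # positive delta means a worse move
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        t *= cooling                        # exponential (geometric) cooling
    return current
```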
What are the properties of simulated annealing?
If the temperature T is decreased slowly enough, the algorithm converges to an optimal state, which gives a theoretical guarantee. However, convergence may take a very long time.
What is gradient descent?
Used when we want to minimise over continuous variables. Compute the gradient (the vector of partial derivatives with respect to each variable), take a small step downhill in the direction of the negative gradient, and repeat.
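A minimal sketch of the update rule x ← x - lr·∇f(x), where `grad` is an assumed gradient function and `lr` an assumed step size:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeat the update x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# Toy example: minimise f(x, y) = x^2 + y^2, whose gradient is (2x, 2y).
print(gradient_descent(lambda v: [2 * v[0], 2 * v[1]], [3.0, 4.0]))  # ~[0, 0]
```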