Local Search Flashcards

1
Q

What does local search return?

A

A state, not a path like normal search algorithms

2
Q

What are stopping criteria for local search?

A

Heuristic value, goal test and number of runs.

3
Q

What is iterative best improvement strategy?

A

A local search strategy that always selects the successor that most improves a heuristic function (typically a loss to minimise or a gain to maximise). It makes a greedy choice at every step.

4
Q

What is hill climbing?

A

Maximising a goal value.

5
Q

What is greedy descent?

A

Minimising a goal value.

6
Q

How does hill climbing work?

A

Start from an arbitrary state and iteratively move to the best neighbouring state. Stop when no better neighbour exists.
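A minimal Python sketch of this loop; the `neighbours` and `h` helpers are illustrative assumptions, not a fixed API:

```python
def hill_climb(state, neighbours, h):
    """Greedy ascent: repeatedly move to the best neighbour until
    no neighbour improves the heuristic value h."""
    while True:
        best = max(neighbours(state), key=h)
        if h(best) <= h(state):
            return state  # local maximum: no better neighbour
        state = best

# Toy example: maximise h(x) = -(x - 3)^2 over the integers,
# where each state x has neighbours x - 1 and x + 1.
result = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```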

7
Q

What are the drawbacks of hill-climbing?

A

Depending on the initial state, it can get stuck in a local maximum.

8
Q

What is hill climbing with sideways moves?

A

Uses the same procedure as hill climbing, except that when there are no uphill successors the algorithm may move sideways to an equally good neighbour. A parameter m limits the number of consecutive sideways moves so the search still terminates.
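A sketch of the variant with the sideways-move limit m; as before, the `neighbours` and `h` helpers are assumptions for illustration:

```python
def hill_climb_sideways(state, neighbours, h, m=100):
    """Hill climbing that also allows up to m consecutive sideways
    moves across plateaus of equal heuristic value."""
    sideways = 0
    while True:
        best = max(neighbours(state), key=h)
        if h(best) > h(state):
            state, sideways = best, 0             # uphill: reset the counter
        elif h(best) == h(state) and sideways < m:
            state, sideways = best, sideways + 1  # sideways move on a plateau
        else:
            return state

# Toy plateau: h(x) = min(x, 5) rises until x = 5, then is flat,
# so plain hill climbing stops at 5 while this version keeps walking.
plateau = hill_climb_sideways(0, lambda x: [x - 1, x + 1],
                              lambda x: min(x, 5), m=3)
```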

9
Q

How do sideways moves perform on the n-queens problem?

A

With no sideways moves, the algorithm succeeds only 14% of the time. With sideways moves, the algorithm succeeds 94% of the time, but takes more moves to find a solution.

10
Q

What is enforced hill climbing?

A

Performs breadth-first search from a local optimum to find the next state with a better h value. Good at escaping local optima.

11
Q

What is tabu search?

A

It maintains a fixed-length queue (the tabu list) of the most recently visited states and never steps to a state that is on the list.
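One way to sketch this in Python, using a bounded deque as the tabu list; the `neighbours`/`h` helper signatures and the step budget are assumptions:

```python
from collections import deque

def tabu_search(state, neighbours, h, tabu_len=7, steps=50):
    """Tabu search for maximisation: always move to the best non-tabu
    neighbour (even a worse one) and remember the best state seen."""
    tabu = deque([state], maxlen=tabu_len)  # fixed-length queue of recent states
    best = state
    for _ in range(steps):
        candidates = [s for s in neighbours(state) if s not in tabu]
        if not candidates:
            break                           # every neighbour is tabu
        state = max(candidates, key=h)
        tabu.append(state)                  # oldest entry is evicted automatically
        if h(state) > h(best):
            best = state
    return best

# Toy example: maximise h(x) = -(x - 3)^2; the tabu list stops the
# search from immediately undoing its last move.
found = tabu_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```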

12
Q

What is stochastic search?

A

Local search strategies where randomization plays a prominent role.

13
Q

What is random-restart hill climbing?

A

A series of hill climbing searches from randomly chosen initial states: if a search gets stuck or takes too long, restart from a new random candidate.

14
Q

How to calculate the expected number of runs of random-restart hill climbing?

A

Suppose a single attempt of greedy descent has a success rate of p. Then the expected number of tries is 1/p.
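As a worked example, plugging in the n-queens success rate from the earlier card:

```python
# Each independent restart succeeds with probability p, so the number
# of tries is geometrically distributed with mean 1/p.
p = 0.14               # success rate of plain hill climbing (no sideways moves)
expected_tries = 1 / p  # about 7.14 restarts on average
```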

15
Q

What is first-choice hill climbing?

A

An implementation of stochastic hill climbing that generates successors randomly until it finds one that is uphill, so it never wastes time generating all successors.
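A single first-choice step might look like this; the `random_successor` helper (returning one random successor) and the `max_tries` cutoff are assumptions for illustration:

```python
import random

def first_choice_step(state, random_successor, h, max_tries=100, rng=None):
    """One first-choice hill climbing step: sample successors at random
    and take the first uphill one, never enumerating them all."""
    rng = rng or random.Random(0)
    for _ in range(max_tries):
        nxt = random_successor(state, rng)
        if h(nxt) > h(state):
            return nxt
    return state  # give up: no uphill successor found in max_tries samples

# Toy example on h(x) = -(x - 3)^2: from x = 0 the only uphill
# neighbour is x = 1, so that is the step taken.
step = first_choice_step(0, lambda s, r: s + r.choice([-1, 1]),
                         lambda x: -(x - 3) ** 2)
```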

16
Q

What is stochastic hill climbing?

A

Randomly selects amongst the uphill moves; the selection probability can vary with the steepness of the move. It can still get stuck in local maxima.

17
Q

What is random walk hill climbing?

A

Adds randomisation to help the search escape local optima.
1. Pick a walk probability p.
2. At every step, with probability p take an uninformed random-walk step; with probability 1-p make a greedy choice.
This search strategy is PAC.
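The two-step recipe above can be sketched as follows; the helper signatures, step budget, and default walk probability are assumptions:

```python
import random

def random_walk_climb(state, neighbours, h, p=0.3, steps=200, seed=0):
    """Random-walk hill climbing: with walk probability p take a uniformly
    random neighbour, otherwise the greedy best one. Returns the best
    state seen, since the walk itself may wander away afterwards."""
    rng = random.Random(seed)
    best = state
    for _ in range(steps):
        ns = neighbours(state)
        state = rng.choice(ns) if rng.random() < p else max(ns, key=h)
        if h(state) > h(best):
            best = state
    return best

# Toy example: maximise h(x) = -(x - 3)^2 starting from x = 0.
walked = random_walk_climb(0, lambda x: [x - 1, x + 1],
                           lambda x: -(x - 3) ** 2)
```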

18
Q

What is PAC (probabilistic approximately complete)?

A

A search strategy is probabilistically approximately complete if the probability that a try fails to find an optimal solution can be made arbitrarily small by running the search for sufficiently long.

19
Q

What is probabilistic hill-climbing?

A

It allows worsening search steps with a probability that depends on the respective deterioration in evaluation function value.

20
Q

How does annealing work?

A

As the algorithm proceeds, the probability of accepting a worse successor decreases. The temperature parameter determines this probability: at a high temperature, locally bad moves are accepted with higher probability; at a low temperature, with lower probability. Typically the algorithm starts at a high temperature and decreases it exponentially.
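A common Metropolis-style sketch of this schedule, where a worsening move with heuristic change delta <= 0 is accepted with probability exp(delta / t); the helpers, starting temperature, and cooling rate are assumptions:

```python
import math
import random

def simulated_annealing(state, neighbours, h, t0=10.0, cooling=0.95,
                        steps=400, seed=0):
    """Always accept uphill moves; accept a worse successor with
    probability exp(delta / t), where t decays exponentially."""
    rng = random.Random(seed)
    t = t0
    for _ in range(steps):
        nxt = rng.choice(neighbours(state))
        delta = h(nxt) - h(state)       # positive means the move is uphill
        if delta > 0 or rng.random() < math.exp(delta / t):
            state = nxt
        t *= cooling                    # high t early, low t late
    return state

# Toy example: maximise h(x) = -(x - 3)^2 starting from x = 0.
annealed = simulated_annealing(0, lambda x: [x - 1, x + 1],
                               lambda x: -(x - 3) ** 2)
```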

21
Q

What are the properties of simulated annealing?

A

If the temperature t is decreased slowly enough, the algorithm converges to an optimal state. This is a theoretical guarantee, but convergence may take a very long time.

22
Q

What is gradient descent?

A

Used when minimising a function of continuous variables: compute the gradient of the function at the current point, take a small step downhill in the direction opposite to the gradient, and repeat.
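The update rule is just `x := x - lr * grad(x)`; the learning rate and step count below are illustrative choices:

```python
def gradient_descent(grad, x, lr=0.1, steps=100):
    """Minimise a differentiable function by repeatedly stepping a
    small amount downhill, i.e. against the gradient."""
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy example: minimise f(x) = (x - 2)^2, whose gradient is 2(x - 2);
# the minimum is at x = 2.
x_min = gradient_descent(lambda x: 2 * (x - 2), x=0.0)
```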