Graph searching Flashcards
What do we mean by ‘optimal’ and ‘complete’ in graph searching?
- Optimal: The strategy guarantees that the best solution is found.
- Complete: The strategy guarantees that a solution is found.
Differentiate:
- Uniform cost search
- Greedy search
- A* search
- Uniform cost search is uninformed: it does not estimate the distance between the current node and the goal. It expands the unexplored node with the smallest path cost g(n) from the root.
- Greedy search is informed. It uses a heuristic function h(n) and expands the node with the smallest h, i.e. the one estimated to be closest to the goal.
- A* combines path cost and heuristic: its evaluation function is f(n) = g(n) + h(n), the estimated cost of the cheapest solution path from the root to a goal that passes through n. (A combined sketch of all three follows below.)
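The three strategies differ only in how they order the frontier. Here is a minimal sketch, assuming a hypothetical graph given as an adjacency dict {node: [(neighbour, step_cost), ...]} and a caller-supplied heuristic h; ordering by g gives uniform cost search, by h gives greedy search, and by g + h gives A*.

```python
import heapq
import itertools

def best_first_search(graph, start, goal, f):
    """Generic best-first search; f(g, n) maps the path cost so far and
    the node to the priority used for ordering the frontier."""
    counter = itertools.count()                      # tie-breaker for the heap
    frontier = [(f(0, start), next(counter), 0, start, [start])]
    explored = set()
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in explored:
            continue
        explored.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in explored:
                g2 = g + cost
                heapq.heappush(frontier,
                               (f(g2, neighbour), next(counter), g2, neighbour, path + [neighbour]))
    return None, float("inf")

def uniform_cost(graph, start, goal):
    return best_first_search(graph, start, goal, lambda g, n: g)

def greedy(graph, start, goal, h):
    return best_first_search(graph, start, goal, lambda g, n: h(n))

def a_star(graph, start, goal, h):
    return best_first_search(graph, start, goal, lambda g, n: g + h(n))
```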
What is an admissible heuristic? Why is it useful?
An admissible heuristic h(n) is one that never overestimates the cost of the best path from n to a goal, i.e. h(n) <= h*(n), where h*(n) is the true optimal cost. If h(n) is admissible then tree-search A* is optimal. (For example, straight-line distance is admissible for route finding, since no route can be shorter than the straight line.)
What does it mean for a heuristic h(n) to be monotonic?
h(n) is called monotonic (or consistent) if, for every node n and every successor n′ of n with step cost c(n, n′), we have h(n) <= c(n, n′) + h(n′). Equivalently, the f values never decrease along a path: f(n′) >= f(n).
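The equivalence of the two statements is a short calculation, using f(n) = g(n) + h(n) and g(n′) = g(n) + c(n, n′):

```latex
\begin{aligned}
f(n') &= g(n') + h(n') \\
      &= g(n) + c(n, n') + h(n') \\
      &\ge g(n) + h(n) && \text{(since } h(n) \le c(n, n') + h(n')\text{)} \\
      &= f(n).
\end{aligned}
```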
What is an IDA* search?
Iterative deepening search uses depth-first search with a depth limit that is gradually increased. IDA* does the same thing with a limit on f cost.
The function contour searches from a given node as far as the specified f limit. It returns either a solution, or the smallest f value that exceeded the limit, which becomes the next limit to try.
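A minimal sketch, reusing the adjacency-dict graph and heuristic h assumed above; the helper name contour follows the description in the answer.

```python
import math

def ida_star(graph, start, goal, h):
    """Iterative deepening A*: repeated depth-first searches, each bounded
    by an f = g + h limit that grows between iterations."""
    def contour(node, g, path, f_limit):
        f = g + h(node)
        if f > f_limit:
            return None, f                       # over the limit: report candidate next limit
        if node == goal:
            return path, f
        next_limit = math.inf
        for neighbour, cost in graph.get(node, []):
            if neighbour in path:                # avoid cycles on the current path
                continue
            solution, new_limit = contour(neighbour, g + cost, path + [neighbour], f_limit)
            if solution is not None:
                return solution, new_limit
            next_limit = min(next_limit, new_limit)
        return None, next_limit

    f_limit = h(start)
    while True:
        solution, f_limit = contour(start, 0, [start], f_limit)
        if solution is not None:
            return solution
        if f_limit == math.inf:
            return None                          # no solution exists
```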
What is RBFS?
The idea is to do a best-first search while using only linear space. This can be done with a modified depth-first search:
- We remember f(n′) for the best alternative node n′ we have seen so far on the way to the node n we are currently considering.
- If f(n) > f(n′), go back and search the alternative. As we retrace our steps we replace the f cost of every node on the current path with f(n); this is how we remember how good a discarded path is. (A sketch follows below.)
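A minimal sketch, again assuming an adjacency-dict graph and heuristic h; each recursive call is given an f limit equal to the best alternative seen on the path so far.

```python
import math

def recursive_best_first_search(graph, start, goal, h):
    """RBFS sketch: best-first behaviour in linear space."""

    def rbfs(node, path, g, f_node, f_limit):
        if node == goal:
            return path, f_node
        successors = []
        for child, cost in graph.get(node, []):
            if child in path:                            # avoid cycles on the current path
                continue
            g_child = g + cost
            # Inherit the parent's backed-up f value if it is larger.
            successors.append([max(g_child + h(child), f_node), g_child, child])
        if not successors:
            return None, math.inf
        while True:
            successors.sort(key=lambda s: s[0])          # best (lowest f) first
            best = successors[0]
            if best[0] > f_limit:
                return None, best[0]                     # fail, report backed-up f value
            alternative = successors[1][0] if len(successors) > 1 else math.inf
            result, best[0] = rbfs(best[2], path + [best[2]], best[1],
                                   best[0], min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    solution, _ = rbfs(start, [start], 0, h(start), math.inf)
    return solution
```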
What is local search?
In what situations is local search useful?
- Instead of trying to find a path from start state to goal, we explore the local area of the graph, meaning those nodes one edge away from the one we’re at.
- We have an evaluation function f(n) that measures the quality of node n. The aim is to find an n such that f(n) is maximized.
- Good when path to goal is not important, and there is a nice way of measuring quality of node, e.g. 8-Queens problem.
Describe hill climbing search.
1. Generate a start state n.
2. Generate the N neighbours {n_1, …, n_N} of n.
3. If max{f(n_i)} <= f(n), return n.
4. Set n = the n_i maximizing f(n_i).
5. Go to 2.
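A minimal sketch of steepest-ascent hill climbing, applied to the 8-Queens problem mentioned above; the state representation and helper names are illustrative.

```python
import random

def hill_climb(start, neighbours, f):
    """Move to the best neighbour until no neighbour improves on the current state."""
    n = start
    while True:
        best = max(neighbours(n), key=f, default=n)
        if f(best) <= f(n):
            return n                     # local maximum (or edge of a plateau)
        n = best

# 8-Queens: a state is a tuple giving the row of the queen in each column,
# and the quality is minus the number of attacking pairs (0 = solved).
def attacking_pairs(state):
    return sum(1 for i in range(len(state)) for j in range(i + 1, len(state))
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def queens_neighbours(state):
    for col in range(len(state)):
        for row in range(len(state)):
            if row != state[col]:
                yield state[:col] + (row,) + state[col + 1:]

start = tuple(random.randrange(8) for _ in range(8))
result = hill_climb(start, queens_neighbours, lambda s: -attacking_pairs(s))
print(result, attacking_pairs(result))
```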
What are the potential obstacles that prevent you from finding the global maximum in local search?
What can you do to overcome them?
There are two major obstacles:
- Local maxima: peaks that are higher than all of their neighbours but lower than the global maximum, so the search gets stuck on them.
- Shoulders/plateaus: locally flat regions where no neighbour looks better, so there is no uphill direction to follow.
There are many ways of avoiding these obstacles:
- Stochastic hill-climbing: chooses randomly among the uphill neighbours, with a probability that can vary with the steepness of the improvement.
- First choice: generates neighbours at random and picks the first one whose f value is better than the current node's.
- Random restart: run the procedure n times from randomly generated start states, with a time limit on each run, and keep the best result.
- Simulated annealing: similar to stochastic hill-climbing, but also allows downhill moves; it starts with a lot of random variation and reduces it over time (see the sketch after this list).
- Beam search: maintain k nodes at any given time. At each search step, find the neighbours of each and keep the best k of all the successors.
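A minimal sketch of simulated annealing, in the same style as the hill-climbing code above; here neighbours(n) returns an iterable of neighbour states, f is the quality function, and the geometric cooling schedule is an illustrative assumption.

```python
import math
import random

def simulated_annealing(start, neighbours, f, schedule):
    """Always accept uphill moves; accept downhill moves with probability
    exp(delta / T), where delta < 0 is the change in f and T the temperature."""
    current = start
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        candidate = random.choice(list(neighbours(current)))
        delta = f(candidate) - f(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
        t += 1

def geometric_schedule(t, T0=10.0, alpha=0.95, T_min=0.01):
    T = T0 * (alpha ** t)
    return T if T > T_min else 0
```

With the 8-Queens helpers from the hill-climbing sketch, a call would look like simulated_annealing(start, queens_neighbours, lambda s: -attacking_pairs(s), geometric_schedule).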
Describe the minimax algorithm and discuss its disadvantages.
- Generate the complete tree and label the leaves according to the utility function.
- Working from the leaves of the tree upward, label the nodes depending on whether Max or Min is to move.
- If Min is to move, label the current node with the minimum of its children's values.
- If Max is to move, label the current node with the maximum of its children's values.
Disadvantage: we need to generate the whole game tree, which is computationally infeasible in practice for non-trivial games.
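A minimal recursive sketch, assuming the game tree is given by a children(node) function (returning an empty sequence at a leaf) and a utility(node) function for leaves; both names are illustrative.

```python
def minimax(node, maximizing, children, utility):
    """Minimax value of a game-tree node."""
    succ = list(children(node))
    if not succ:
        return utility(node)                 # leaf: score it with the utility function
    values = [minimax(child, not maximizing, children, utility) for child in succ]
    return max(values) if maximizing else min(values)
```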
What exactly is alpha-beta pruning?
A depth-first version of minimax which saves work by pruning subtrees that cannot affect the minimax value at the root.
We look at a path from the root and record α and β, where
- α = the highest utility seen so far on the path for Max
- β = the lowest utility seen so far on the path for Min
Initial values are α = -∞ and β = +∞. A branch is pruned (its remaining successors are skipped) as soon as α >= β.
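A sketch in the same setting as the minimax code above (the same assumed children and utility functions); a subtree is abandoned as soon as α >= β.

```python
import math

def alphabeta(node, maximizing, children, utility, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning."""
    succ = list(children(node))
    if not succ:
        return utility(node)
    if maximizing:
        value = -math.inf
        for child in succ:
            value = max(value, alphabeta(child, False, children, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                        # remaining siblings cannot change the result
        return value
    else:
        value = math.inf
        for child in succ:
            value = min(value, alphabeta(child, True, children, utility, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```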