Graph searching Flashcards

1
Q

What do we mean by ‘optimal’, ‘complete’ in graph searching?

A
  • Optimal: The strategy guarantees that the best solution is found.
  • Complete: The strategy guarantees that a solution is found.
2
Q

Differentiate:

  • Uniform cost search
  • Greedy search
  • A* search
A
  • Uniform cost search is uninformed: it uses no estimate of the distance from the current node to the goal. It expands the unexplored node with the smallest path cost g(n).
  • Greedy search is informed: it uses a heuristic function h(n) and expands the node with the smallest h, i.e. the one estimated closest to the goal.
  • A* combines both: its evaluation function f(n) = g(n) + h(n) estimates the cost of the cheapest path from the root to the goal through n.
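
The three strategies differ only in the priority used to order the frontier. A minimal A* sketch in Python (the `neighbours`/`h` interfaces and the example graph are illustrative assumptions, not from the cards):

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search: expand the frontier node with the smallest
    f(n) = g(n) + h(n).  neighbours(n) yields (successor, step_cost)
    pairs; h(n) estimates the cost from n to the goal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None
```

Replacing the priority f(n) with g(n) alone gives uniform cost search; using h(n) alone gives greedy search.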
3
Q

What is an admissible heuristic? Why is it useful?

A

An admissible heuristic h(n) is one that never overestimates the cost of the best path from n to a goal. If h(n) is admissible then tree-search A* is optimal.
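
For example, Manhattan distance on a 4-connected grid is admissible: every move costs 1 and changes exactly one coordinate by 1, so the true remaining cost can never be below the estimate. A tiny sketch (the grid setting is an illustrative assumption):

```python
def manhattan(n, goal):
    """Admissible heuristic for a 4-connected unit-cost grid: each move
    changes one coordinate by 1, so the true remaining cost is at least
    this estimate -- it never overestimates."""
    return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
```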

4
Q

What does it mean for a heuristic h(n) to be monotonic?

A

A heuristic h(n) is monotonic (also called consistent) if f never decreases along a path: for every node n and successor n′, f(n′) >= f(n). Equivalently, h(n) <= c(n, n′) + h(n′) for every edge, a form of the triangle inequality. Monotonicity implies admissibility.
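
The edge form of the condition can be checked directly (the edge-list representation below is an illustrative assumption):

```python
def is_consistent(edges, h):
    """Check h(n) <= c(n, n') + h(n') for every edge (n, n'), which is
    equivalent to f never decreasing along any path.
    edges maps each node to a list of (successor, step_cost) pairs;
    h maps each node to its heuristic value."""
    return all(h[n] <= cost + h[s]
               for n, succs in edges.items()
               for s, cost in succs)
```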

5
Q

What is an IDA* search?

A

Iterative deepening search uses depth-first search with a depth limit that is gradually increased. IDA* does the same thing with a limit on the f cost.

The function contour searches from a given node, as far as the specified f limit. It returns either a solution, or the smallest f value that exceeded the limit, which becomes the next limit to try.
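
A minimal IDA* sketch, with `contour` as the inner recursive search (the `neighbours`/`h` interfaces are illustrative assumptions):

```python
import math

def ida_star(start, goal, neighbours, h):
    """IDA*: repeated depth-first contour searches with an increasing
    f-cost limit.  neighbours(n) yields (successor, step_cost) pairs."""
    def contour(node, g, limit, path):
        f = g + h(node)
        if f > limit:
            return None, f              # f exceeded the limit: candidate next limit
        if node == goal:
            return path, f
        next_limit = math.inf
        for succ, cost in neighbours(node):
            if succ in path:            # avoid cycles on the current path
                continue
            found, t = contour(succ, g + cost, limit, path + [succ])
            if found is not None:
                return found, t
            next_limit = min(next_limit, t)
        return None, next_limit         # smallest f seen beyond the limit

    limit = h(start)
    while True:
        found, t = contour(start, 0, limit, [start])
        if found is not None:
            return found
        if t == math.inf:
            return None                 # search space exhausted, no solution
        limit = t
```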

6
Q

What is RBFS?

A

The idea is to do a best-first search while using only linear space. This can be done with a modified depth-first search:

  1. We remember f(n′) for the best alternative node n′ we’ve seen so far on the way to the node n we’re currently considering.
  2. If f(n) > f(n′), go back and search the alternative. As we retrace our steps, we replace the f cost of each node on the current path with the best f value found below it. (This is how we remember how good a discarded path is.)
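
These two ideas can be sketched compactly (a sketch of the standard AIMA-style formulation; the interfaces are illustrative assumptions):

```python
import math

def rbfs(start, goal, neighbours, h):
    """Recursive best-first search: best-first behaviour in linear space,
    backing up the best f value of each abandoned subtree."""
    def search(node, path, g, f, f_limit):
        if node == goal:
            return path, f
        # Each successor inherits the parent's backed-up f, so a
        # re-expanded subtree remembers how good it looked before.
        succs = [[max(g + c + h(s), f), g + c, s]
                 for s, c in neighbours(node) if s not in path]
        if not succs:
            return None, math.inf
        while True:
            succs.sort()                        # best (lowest f) first
            best_f, best_g, best = succs[0]
            if best_f > f_limit:
                return None, best_f             # back up this subtree's f
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, succs[0][0] = search(best, path + [best], best_g,
                                         best_f, min(f_limit, alternative))
            if result is not None:
                return result, succs[0][0]

    found, _ = search(start, [start], 0, h(start), math.inf)
    return found
```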
7
Q

What is local search?

In what situations is local search useful?

A
  • Instead of trying to find a path from start state to goal, we explore the local area of the graph, meaning those nodes one edge away from the one we’re at.
  • We have an evaluation function f(n) that determines the quality of node n. Aim is to find an n such that f(n) is maximized.
  • Good when path to goal is not important, and there is a nice way of measuring quality of node, e.g. 8-Queens problem.
8
Q

Describe hill climbing search.

A
  1. Generate a start state n.
  2. Generate the N neighbours {n_1,…,n_N} of n.
  3. if (max{f(n_i)} <= f(n)) return n.
  4. Set n = n_i maximizing f(n_i).
  5. Goto 2.
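
The steps above can be sketched directly (the neighbour and evaluation functions are illustrative assumptions):

```python
def hill_climb(start, neighbours, f):
    """Steepest-ascent hill climbing, following the numbered steps:
    repeatedly move to the best neighbour until none improves on n."""
    n = start                                    # step 1
    while True:
        succs = neighbours(n)                    # step 2
        best = max(succs, key=f, default=None)
        if best is None or f(best) <= f(n):      # step 3: local maximum
            return n
        n = best                                 # step 4, then goto 2
```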
9
Q

What are the potential obstacles that prevent you from finding the global maximum in local search?

What can you do to overcome them?

A

There are two major obstacles:

  1. Local maxima: small peaks that trap the search.
  2. Shoulders/plateaus: locally flat regions that give no gradient to follow.

There are many ways of avoiding these obstacles:

  1. Stochastic hill-climbing: chooses among the better neighbours with probability proportional to their f values.
  2. First choice: generates neighbours at random and picks the first one whose f value beats the current node’s.
  3. Random restart: runs the procedure n times from random start states, with a time limit on each run.
  4. Simulated annealing: similar to stochastic hill-climbing, but starts with high variation (allowing downhill moves) and reduces it over time.
  5. Beam search: maintains k nodes at any given time; at each step, generates the neighbours of all k and keeps the best k of the successors.
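
Random restart, for instance, can be sketched as follows (a minimal sketch; the start-state generator and interfaces are illustrative assumptions):

```python
import random

def random_restart(random_start, neighbours, f, restarts=10):
    """Random-restart hill climbing: run plain hill climbing from
    several random start states and keep the best local maximum."""
    best = None
    for _ in range(restarts):
        n = random_start()
        while True:                              # plain hill climbing
            nxt = max(neighbours(n), key=f, default=None)
            if nxt is None or f(nxt) <= f(n):
                break                            # local maximum reached
            n = nxt
        if best is None or f(n) > f(best):
            best = n
    return best
```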
10
Q

Describe the minimax algorithm, and discuss its disadvantages.

A
  • Generate the complete tree and label the leaves according to the utility function.
  • Working from the leaves of the tree upward, label the nodes depending on whether Max or Min is to move.
  • If Min is to move label the current node with the minimum utility of any descendant.
  • If Max is to move label the current node with the maximum utility of any descendant.

We need to generate the whole tree for the minimax algorithm, which is computationally infeasible for non-trivial games.
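
The bottom-up labelling can be sketched recursively (the dict-based tree representation is an illustrative assumption):

```python
def minimax(node, utility, successors, maximizing=True):
    """Label the game tree bottom-up: leaves take their utility; an
    internal node takes the max (Max to move) or min (Min to move)
    of its children's values."""
    children = successors(node)
    if not children:                    # leaf: apply the utility function
        return utility(node)
    values = [minimax(c, utility, successors, not maximizing)
              for c in children]
    return max(values) if maximizing else min(values)
```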

11
Q

What exactly is alpha-beta pruning?

A

A depth-first search that optimizes minimax by pruning subtrees that cannot affect the final minimax value.

We look at a path through root, and record alpha, beta, where

  • α = the highest utility seen so far on the path for Max
  • β = the lowest utility seen so far on the path for Min

Initial values are α = −∞ and β = +∞.
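
A sketch of alpha-beta on the same kind of game tree as minimax (the dict-based representation is an illustrative assumption):

```python
import math

def alphabeta(node, utility, successors, maximizing=True,
              alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: alpha is the best value Max can
    guarantee on the current path, beta the best Min can guarantee.
    Once alpha >= beta, the remaining children cannot affect the
    minimax value and are skipped."""
    children = successors(node)
    if not children:
        return utility(node)
    if maximizing:
        value = -math.inf
        for c in children:
            value = max(value, alphabeta(c, utility, successors,
                                         False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # prune remaining children
        return value
    value = math.inf
    for c in children:
        value = min(value, alphabeta(c, utility, successors,
                                     True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                       # prune remaining children
    return value
```

It returns the same value as plain minimax, while visiting fewer nodes when pruning fires.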
