Midterm Review Flashcards

1
Q

Which attributes are typically stored on the nodes of a search tree for uninformed search?

A

the state of the node
link to the parent node
the last action taken before moving to this state
the depth of the node
the path cost from the root to the current node
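
A minimal sketch of such a node in Python (field and method names are illustrative, not taken from the course):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """One node of a search tree for uninformed search."""
    state: Any                       # the state this node represents
    parent: Optional["Node"] = None  # link to the parent node
    action: Any = None               # last action taken before reaching this state
    depth: int = 0                   # depth of the node in the tree
    path_cost: float = 0.0           # cost of the path from the root to this node

    def path(self):
        """Reconstruct the root-to-node sequence of states via parent links."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```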

2
Q

What additional information would you need to store for informed search compared to uninformed search?

A

an estimate of the cost from the node to the goal (the heuristic value h(n))

3
Q

What is the difference between tree-search and graph-search?

A

Tree search allows for repeated states, and graph search doesn’t

Graph search uses a closed list to keep track of explored nodes
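
A sketch of the graph-search pattern, assuming an illustrative successors(state) callback that yields (action, next_state) pairs (not an interface from the course); with a FIFO frontier it behaves like breadth-first graph search:

```python
from collections import deque

def graph_search(start, is_goal, successors):
    """Generic graph search: a frontier plus a closed (explored) set."""
    frontier = deque([(start, [])])   # (state, list of actions taken so far)
    explored = set()                  # the closed list
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        if state in explored:
            continue                  # repeated state: skip it (graph search)
        explored.add(state)
        for action, nxt in successors(state):
            if nxt not in explored:
                frontier.append((nxt, path + [action]))
    return None                       # no solution found
```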

4
Q

What type of search do we use when we are searching for a path on a grid?

A

Graph search

Allowing repeated states makes searching a grid intractable. Every time you expand a child node, it re-generates its parent, which can then be explored again; the parent then re-generates the child, and the process repeats without bound because nothing prevents revisiting states.

5
Q

What is dynamic programming?

A

An algorithmic technique for solving an optimization problem by breaking it down into simpler subproblems and using the fact that the optimal solution can be built from the optimal solutions to its subproblems; each subproblem is solved once and its result is stored for reuse
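
A small illustration in Python (the coin-change problem is an example of my choosing, not from the course): the fewest coins for an amount is built from the optimal answers for the smaller amounts, and each subproblem is cached so it is solved only once.

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 5, 10, 25)):
    """Fewest coins summing to `amount` via dynamic programming."""
    @lru_cache(maxsize=None)          # memoize: each subproblem solved once
    def best(a):
        if a == 0:
            return 0
        # optimal answer for `a` built from optimal answers to smaller amounts
        candidates = [best(a - c) for c in coins if c <= a]
        return 1 + min(candidates) if candidates else float("inf")
    return best(amount)

print(min_coins(63))  # 6 -> 25 + 25 + 10 + 1 + 1 + 1
```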

6
Q

What type of search do we use when searching for a solution to the 8-queens problem?

A

Tree search

You place one queen per row and, at each node, list every non-conflicting column for the next row. Each node is a distinct partial placement, so the same state is never generated twice and a closed list adds no benefit; pruning placements based on states seen earlier would only risk excluding valid configurations, making a solution difficult or nearly impossible to find.

https://www.codeproject.com/Articles/806248/Eight-Queens-Puzzle-Tree-Algorithm
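
A depth-first tree-search sketch of this idea (helper names are illustrative): one queen per row, expanding every column that does not conflict with the queens placed so far.

```python
def eight_queens(n=8):
    """Depth-first tree search for the n-queens problem."""
    def conflicts(cols, col):
        # cols[r] is the column of the queen already placed in row r
        row = len(cols)
        return any(c == col or abs(c - col) == row - r
                   for r, c in enumerate(cols))

    def expand(cols):
        if len(cols) == n:            # all rows filled: goal state
            return cols
        for col in range(n):          # list all placements for the next row
            if not conflicts(cols, col):
                result = expand(cols + [col])
                if result:
                    return result
        return None

    return expand([])

print(eight_queens())  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```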

7
Q

Which type of search do we use when we are searching for a path between cities on a road network? Why?

A

Graph search

You don’t want to visit a city more than once when traveling between cities. It would make the path unnecessarily long

8
Q

What are the criteria used to compare search algorithms?

A

Time and space complexity, completeness, and optimality

9
Q

What are the properties for BFS? Completeness, optimality, time and space complexity

A

Complete: yes, if the branching factor b is finite
optimal: yes, if all step costs are equal (it finds the shallowest goal node)
Time complexity: O(b^d), b = branching factor, d = depth of shallowest solution
Space complexity: O(b^d), b = branching factor, d = depth of shallowest solution

10
Q

What are the properties for DFS? Completeness, optimality, time and space complexity

A

Complete: only in a finite state space (depth-first tree search can follow an infinite path)
optimal: no, it may return a deep solution when a shallower one exists
Time complexity: O(b^m), b = branching factor, m = maximum depth of the tree
Space complexity: O(bm), b = branching factor, m = maximum depth of the tree

11
Q

What are the properties for Iterative-deepening depth first search? Completeness, optimality, time and space complexity

A

Complete: yes, if the branching factor is finite
optimal: yes, if all edges have the same cost
Time complexity: O(b^d), b = branching factor, d = depth of shallowest goal node
Space complexity: O(bd), b = branching factor, d = depth of shallowest goal node
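
A sketch of iterative deepening, reusing the illustrative successors(state) interface assumed in the graph-search sketch above (yielding (action, next_state) pairs):

```python
def depth_limited(state, is_goal, successors, limit):
    """DFS that refuses to expand nodes deeper than `limit`."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for _, nxt in successors(state):
        found = depth_limited(nxt, is_goal, successors, limit - 1)
        if found:
            return [state] + found
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ...:
    BFS-like completeness with DFS-like O(bd) memory."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, is_goal, successors, limit)
        if found:
            return found
    return None
```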

12
Q

What are the properties for uniform cost search? Completeness, optimality, time and space complexity

A

Complete: yes
optimal: yes
Time complexity: O(b^(1 + [C*/ε])), b = branching factor, C* = cost of the optimal solution, ε = minimum step cost, [·] = floor
Space complexity: O(b^(1 + [C*/ε])), same quantities
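
A sketch of uniform-cost graph search with a priority queue keyed on g(n), assuming an illustrative successors(state) callback that yields (next_state, step_cost) pairs:

```python
import heapq
from itertools import count

def uniform_cost_search(start, is_goal, successors):
    """Always expand the frontier node with the smallest path cost g(n)."""
    tie = count()                              # tie-breaker so states are never compared
    frontier = [(0.0, next(tie), start, [start])]   # (g, tie, state, path)
    explored = set()
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, next(tie), nxt, path + [nxt]))
    return None
```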

13
Q

What are the properties for bidirectional search? Completeness, optimality, time and space complexity

A

Complete: yes
optimal: Yes, if the edges have the same cost
Time complexity: O(b^(d/2)), b = branching factor, d = depth of shallowest solution
Space complexity: O(b^(d/2)), b = branching factor, d = depth of shallowest solution

14
Q

Which uninformed search technique has the best time and space complexity? What is the time and space complexity?

A

Best time complexity: BFS and iterative deepening, both O(b^d)

Best space complexity: DFS with O(bm) and iterative deepening with O(bd)

Best combination of both: iterative deepening, with O(b^d) time and O(bd) space

15
Q

Which algorithm would you use if you knew that the depth of the search tree is bounded and why?

A

Iterative deepening
It has a much better space complexity than BFS (O(bd) instead of O(b^d)) while remaining complete

16
Q

Prove that uniform-cost search and breadth-first search with constant edge costs are optimal when used in GRAPH-SEARCH.

A

https://www.geeksforgeeks.org/breadth-first-search-is-a-special-case-of-uniform-cost-search/

Sketch: uniform-cost graph search always selects the frontier node with the smallest path cost g(n), and g never decreases along a path, so when a node is first selected for expansion it has been reached by a cheapest path; in particular, the first goal node selected is reached optimally. With constant edge costs, path cost is proportional to depth, so breadth-first search selects nodes in the same order and inherits the same guarantee.

17
Q

Show a state space with varying step costs in which a GRAPH-SEARCH approach that uses depth-first (or iterative-deepening depth first) finds a suboptimal solution.

A

Example: start S has two successors, G with step cost 10 and A with step cost 1, and A has successor G with step cost 1. If depth-first search happens to expand the edge S -> G first, it returns the path S -> G with cost 10, even though the optimal path S -> A -> G costs only 2.

18
Q

Under which circumstances can you apply bidirectional search?

A

When both the start and the goal state are known

When you can reverse the problem (search backwards from the goal), which can be hard, especially if there are many possible goal states (e.g. chess)

19
Q

You need an additional data structure in order to solve GRAPH-SEARCH problems. What is this data structure called and how can you implement it?

A

A closed list (e.g. hash table)

20
Q

True/False and Why: Iterative deepening search is preferred over breadth-first search.

A

True
It has the space complexity of DFS combined with the completeness (and, for equal step costs, optimality) of BFS

21
Q

True/False and Why: Bidirectional search is preferred over breadth-first search.

A

False
It is only preferred in specific situations
It can be a good choice when the goal state is known explicitly and the problem is easy to reverse (searching backwards from the goal)

22
Q

What is the name of the overall search strategy that selects the fringe node which minimizes an evaluation function f(n)? Name two approaches that implement this search strategy. How do they define the evaluation function? What does a heuristic represent?

A

Best-first search

Two approaches:
A*: f(n) = g(n) + h(n), where
f(n) = overall value/cost of node
g(n) = true cost of the path from the root node to the current node
h(n): heuristic value of node

Greedy best-first search: f(n) = h(n)
h(n): heuristic value of node; estimated cost from current to goal
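
A sketch of both approaches in one routine (the interface names are illustrative, not from the course): with use_g=True the evaluation function is f = g + h (A*), with use_g=False it is f = h (greedy best-first search). successors(state) is assumed to yield (next_state, step_cost) pairs and h(state) is the heuristic.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, h, use_g=True):
    """Best-first graph search, parameterized by the evaluation function."""
    tie = count()                                     # tie-breaker for the heap
    frontier = [(h(start), next(tie), 0.0, start, [start])]   # (f, tie, g, state, path)
    explored = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                g2 = g + cost
                f = (g2 if use_g else 0.0) + h(nxt)   # A*: g + h, greedy: h only
                heapq.heappush(frontier, (f, next(tie), g2, nxt, path + [nxt]))
    return None
```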

23
Q

What are the properties of greedy best-first search? How does it work?

A

Complete? No, no guarantee that you won’t go down an infinite path
Time complexity: O(b^m) where m = max depth of tree
Space complexity: O(b^m) - algorithm may have to remember all of the leaf nodes at level m
Optimal? No, not even complete

24
Q

What are the properties of A* in TREE-SEARCH? How does it work?

A

Repeated states are allowed

For A* to be optimal, must use an admissible (optimistic) heuristic
h(n) <= C*(n) for all n

Works like regular A*, just allows for repeated states (using a tree to explore paths)

25
Q

What are the properties of A* in GRAPH-SEARCH? How does it work?

A

Repeated states are not allowed -> a closed list keeps track of explored nodes
This is the preferred approach in practice
An admissible heuristic alone is not enough to guarantee optimality here; the heuristic must also be consistent

26
Q

What is an admissible heuristic?

A

h(n) never overestimates the true path cost C*(n)

they are always optimistic

27
Q

What is a consistent heuristic

A

A heuristic is consistent if its estimate at a node is never more than the step cost to a neighbor plus the heuristic estimate at that neighbor: h(n) <= c(n, a, n') + h(n') for every node n and successor n', with h = 0 at goal states (the triangle inequality)

Equivalently, when moving from a node to a neighboring node, the drop in the heuristic value never overestimates the actual step cost

A consistent heuristic is also admissible
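
A small checker for the triangle inequality on an explicit graph (the graph, costs, and heuristic values below are made up for illustration):

```python
def is_consistent(h, edges, goals):
    """Check consistency: h is 0 at goals and h(n) <= cost + h(m) for every edge n -> m.

    `h` maps states to heuristic values, `edges` maps each state to a list of
    (neighbor, step_cost) pairs, and `goals` is the set of goal states.
    """
    if any(h[g] != 0 for g in goals):
        return False
    return all(h[n] <= cost + h[m]
               for n, nbrs in edges.items()
               for m, cost in nbrs)

# Tiny illustrative graph (made-up numbers):
edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
h = {"A": 3, "B": 2, "C": 0}
print(is_consistent(h, edges, goals={"C"}))  # True
```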

28
Q

T/F: Greedy Best-First search with h(n) = 0 for all nodes n is guaranteed to find an optimal solution if all arc costs are 1.

A

Arc cost: an edge cost from one node to another

False; won’t be able to tell which path is the best –> could take a long path

29
Q

In a finite search space containing no goal state, the A* algorithm will always explore all states.

A

True

30
Q

Prove the following statement: “If G2 is a node that corresponds to a goal state but a suboptimal path, A∗ will select a reachable node n along the optimal path before selecting G2 for expansion”.

A

Lemma 1.1

A* will prioritize nodes on the optimal path

Assume C* is the cost from the root to the goal on the true optimal path

f(G2) = g(G2) + h(G2) = g(G2) + 0 > C*

-> h(G2) = 0 because G2 is a goal node
-> the path cost from the root, g(G2), must be greater than the optimal cost to the goal, C*, because G2 is a goal node connected to the root via a suboptimal path
-> for any reachable node n along the optimal path, admissibility gives f(n) = g(n) + h(n) <= C*

Conclusion: f(n) <= C* < f(G2)
Thus n will be selected for expansion before G2

31
Q

Prove that A∗ always finds the optimal path in TREE-SEARCH if the heuristic is admissible.

A

As long as there are reachable nodes along the optimal path, A* with an admissible heuristic will always prefer to expand them before expanding a suboptimal goal node. Consequently, the first goal node reached by A* is the goal node along the optimal path.

32
Q

Is the “Manhattan distance” an admissible heuristic when planning on a grid? Is it true that the “Manhattan distance” is an admissible heuristic for finding the shortest path through the real Manhattan in New York City? [Hint: Go to Google maps and search for: “Broadway & W 33rd St, New York, 10001”. Consider the distance between Madison Square Park and Times square.]

A

On a grid where moves are restricted to horizontal and vertical steps: yes, the Manhattan distance never overestimates the number of moves required.

For the real Manhattan: no. Streets such as Broadway cut diagonally through the grid (see Broadway & W 33rd St), so the actual shortest walking route, e.g. between Madison Square Park and Times Square, can be shorter than the Manhattan distance. A heuristic that can overestimate the true cost is not admissible.

33
Q

What is the Manhattan distance?

A

Manhattan distance: the sum of the absolute coordinate differences, |x1 − x2| + |y1 − y2|. Think of it as the two legs of the right triangle, not the hypotenuse (the x and y components, not the diagonal)

34
Q

Prove a consistent heuristic is also admissible

A

Must prove h(n) <= C*(n) for all n, where C*(n) is the true optimal cost from node n to the goal. Do this via induction on the number of steps from n to the goal.

Base case: for a goal state ng, we have h(ng) <= C*(ng), since h(ng) = 0 and C*(ng) = 0

Inductive step: let n' be the successor of n on an optimal path to the goal and assume h(n') <= C*(n'). The triangle inequality (theorem 1.3) gives h(n) <= c(n, a, n') + h(n') <= c(n, a, n') + C*(n') = C*(n)

35
Q

Show that if a heuristic h(n) is consistent, then the evaluation function used by the A* algorithm f(n) = h(n) + g(n) is non-decreasing along any search path

A

Assume n’ is a successor node of n:
-> g(n’) = g(n) + c(n, a, n’)

-> f(n’) = g(n’) + h(n’) = g(n) + c(n, a, n’) + h(n’)

Since c(n, a, n’) + h(n’) >= h(n) (consistent heuristic)…
f(n’) >= g(n) + h(n) = f(n)
f(n’) >= f(n)
This means that the evaluation function never decreases as we traverse towards the goal (it is non-decreasing along any search path)

36
Q

Show that A* expands nodes with a non-decreasing order of evaluation function.

A

At each iteration, A* picks the fringe node n that minimizes f(n). Every node not yet generated is a descendant of some node in the current fringe and, with a consistent heuristic, has an f value at least as large as that fringe node's, so it will be selected after it for expansion. Consequently, A* always expands nodes in non-decreasing order of f(n).

37
Q

Show that the first goal node expanded by the A* algorithm also corresponds to the optimal goal node

A

Since A* expands nodes in non-decreasing order of the evaluation function, the first goal node that is expanded is the goal node with the minimum evaluation function, and thus the one that corresponds to the optimal path. Suboptimal goal nodes will be at least as expensive as the optimal one in terms of their evaluation function.

38
Q

Show that A* expands all nodes with f(n) < C*, where C* is the optimal solution cost.

A

At some point during A*, the (optimal) goal node ng is expanded

Evaluation function of the goal node: f(ng) = g(ng) + h(ng) = C* + 0 = C*

Since A* expands nodes in non-decreasing order of f, every node with f(n) < C* = f(ng) is expanded before it

Consequence: the closer the heuristic brings f(n) to C*, the fewer nodes are expanded -> more efficient search

39
Q

Under which circumstances is the A* algorithm complete?

A

if the number of nodes is finite, or

if the branching factor is finite and every step cost is at least some ε > 0

40
Q

Which optimal informed search algorithm expands fewer nodes than A* for the same heuristic? Ignore the effects of ties.

A

None: A* is optimally efficient. Any optimal algorithm using the same (consistent) heuristic must expand every node with f(n) < C* to be sure it has not missed a cheaper solution, and A* expands exactly this set (ignoring ties), so no such algorithm can expand fewer nodes.

41
Q

Under which conditions will A* search expand the same nodes as Breadth-First search?

A

When h(n) = 0 for all n and all step costs are equal: A* then behaves like uniform-cost search, which expands the same nodes as breadth-first search (up to tie-breaking)

They also expand the same set of nodes when the (finite) space contains no goal, since both must then explore everything

42
Q

Which of the following are admissible, given admissible heuristics h1, h2?

a) h(n) = min{h1(n), h2(n)}
b) h(n) = h1(n) + (1 − w )h2(n), where 0 ≤ w ≤ 1
c) h(n) = max {h1(n), h2(n)}

A

a) admissible: min{h1(n), h2(n)} <= h1(n) <= true cost
c) admissible: both h1 and h2 are lower bounds on the true cost, so their maximum is too
b) as literally written, not admissible in general (e.g. w = 0 gives h1 + h2, which can overestimate); if the intended formula is the convex combination w·h1(n) + (1 − w)·h2(n), it is admissible, since a weighted average of two lower bounds is itself a lower bound

43
Q

Which of the following are consistent, given consistent heuristics h1, h2?

A

Assuming the same three candidates as the previous card: min{h1, h2} and max{h1, h2} are both consistent, and so is the convex combination w·h1(n) + (1 − w)·h2(n); in each case the triangle inequality h(n) <= c(n, a, n') + h(n') follows from the corresponding inequalities for h1 and h2

44
Q

What is the definition of the effective branching factor?

A

A way to evaluate the number of nodes expanded by a search technique

If an algorithm has visited N nodes and the solution was at depth d, then b* is the branching factor a uniform tree of depth d would need in order to contain N + 1 nodes:

N + 1 = 1 + b* + (b*)^2 + … + (b*)^d

We want b* to be close to 1 -> few redundant nodes expanded
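
A sketch that solves this equation for b* numerically by bisection (the sample numbers in the usage line are illustrative):

```python
def effective_branching_factor(n_visited, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))   # 1 + b + ... + b^d
    lo, hi = 1.0, float(n_visited + 1)                 # b* lies in this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_visited + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(52, 5), 2))  # ≈ 1.92
```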

45
Q

how is the effective branching factor used to compare heuristics?

A

Given two heuristics h1 and h2 where h2(n) >= h1(n) for all n:

-> h2 dominates h1 -> A* with h2 expands no more nodes than A* with h1, so h2 has a lower (or equal) effective branching factor

46
Q

Suggest possible directions for automatically designing heuristics.

A

Using subproblems: create a simpler version of the problem (e.g. for the 15-puzzle, consider only 4 of the tiles and how to get them into place, ignoring the rest)

Combining multiple heuristics: compute heuristics for several subproblems and take the dominant (maximum) value at each state

47
Q

Give examples of relaxed versions of the 15-puzzle?

A

e.g. (1) a tile can move from square A to square B if B is adjacent to A, whether or not B is occupied -> yields the Manhattan-distance heuristic; (2) a tile can move from any square to any other square in one step -> yields the misplaced-tiles heuristic

48
Q

What is the definition of a dominant heuristic?

A

A heuristic dominates another when all of its values are greater than or equal to the corresponding values of the other heuristic: h2 dominates h1 if h2(n) >= h1(n) for all n

49
Q

What is the definition of a two-player zero sum game?

A

deterministic, turn-taking, two-player problems where the utility values of the two agents at the end of the game are always equal and opposite

50
Q

What does it mean to be deterministic?

A

will always give the same outcome given the same input

51
Q

What does it mean to be stochastic?

A

may give different outcomes for the same input

certain levels of unpredictability or randomness

think of as a random event

52
Q

What does it mean to be probabilistic?

A

Derived from probability

53
Q

What are the differences between a search tree used for adversarial search versus a tree for classical single-agent search problems?

A

We are not looking just for a sequence of actions that leads to a goal state; we must take the opponent's actions into account (the result is a strategy, i.e. a response to every possible opponent move, rather than a fixed plan)

54
Q

Describe the operation of the minimax algorithm. What properties does it have in terms of time and space complexity?

A

Uses DFS

Time complexity: O(b^m), where m = maximum depth of the game tree
Space complexity: O(bm)
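
A compact sketch of the algorithm, assuming illustrative successors, utility and is_terminal callbacks describing the game (successors yields the child states of a position):

```python
def minimax(state, successors, utility, is_terminal, maximizing=True):
    """Depth-first minimax: MAX picks the child of highest value,
    MIN the child of lowest value; utilities are defined at the leaves."""
    if is_terminal(state):
        return utility(state)
    values = [minimax(s, successors, utility, is_terminal, not maximizing)
              for s in successors(state)]
    return max(values) if maximizing else min(values)
```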

55
Q

Why do we use depth-first search to solve adversarial search problems?

A

The values of the game are defined at the terminal (leaf) nodes and must be backed up toward the root, so the search has to reach the leaves anyway; depth-first search does this while storing only the current path, giving O(bm) space

56
Q

Describe the operation of alpha-beta pruning. What is the time complexity assuming perfect ordering? What is the time complexity on average?

A

Time (perfect): O(b^[m/2])
Time (avg): O(b^[3m/4])
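
A sketch of the same minimax recursion with alpha-beta cutoffs added, using the same illustrative game interface as above:

```python
def alphabeta(state, successors, utility, is_terminal,
              alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning: stop expanding a node's children once
    its value can no longer affect the decision at an ancestor."""
    if is_terminal(state):
        return utility(state)
    if maximizing:
        value = float("-inf")
        for s in successors(state):
            value = max(value, alphabeta(s, successors, utility, is_terminal,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                  # beta cutoff: a MIN ancestor won't allow this
        return value
    else:
        value = float("inf")
        for s in successors(state):
            value = min(value, alphabeta(s, successors, utility, is_terminal,
                                         alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break                  # alpha cutoff: a MAX ancestor won't allow this
        return value
```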

57
Q

You might be provided a search tree that corresponds to a game and asked to identify which nodes will not be checked by the minimax algorithm if we apply alpha-beta pruning. You might also be asked to compute the value at each node as computed by the minimax algorithm. You might be asked if a different ordering of the terminal nodes would result in increased pruning.

A
58
Q

T/F: A player using minimax is guaranteed to play optimally against any opponent.

A

F

59
Q

T/F: A player using alpha-beta pruning is guaranteed to play optimally against an optimal opponent.

A

True

Caution: if our opponent plays optimally → we play optimally
If the opponent is not optimal → there is no guarantee our play is the best possible response to it
How your opponent plays matters

60
Q

Do you know any other way to accelerate the performance of adversarial search other than alpha-beta pruning?

A

Cut off the search at a limited depth and apply a heuristic evaluation function to the non-terminal positions at that depth

61
Q

What requirements should a heuristic function satisfy in heuristic adversarial search?

A