Exam 2 Flashcards

Chapter 3 - Uninformed and Informed Searches

1
Q

What does a reflex agent do?

A

It chooses an action based on its current percept (and possibly memory), and doesn’t consider the future consequences of its actions (which may make it irrational)

2
Q

What does a planning agent do?

A

It makes a decision based on an evaluation of future action sequences

3
Q

What does a search algorithm assume?

A

A known deterministic transition model, discrete states and actions, full observability, and atomic representation

4
Q

What is goal formulation?

A

The first step in problem solving: based on the current situation and the agent’s performance measure, the agent adopts a goal, which limits its objectives and hence the actions that need to be considered

5
Q

What is problem formulation?

A

The process of deciding what actions and states to consider, given a goal

6
Q

What’s the difference between a search and an execution?

A

The search simply looks for a sequence of actions that reaches the goal, stopping once one is found, while execution is the actual carrying out of the actions in the solution, typically one at a time

7
Q

What does a search problem consist of?

A

A state space (S), an initial state (s0), actions (A(s)), a transition model (Result(s, a)), a goal test (G(s)), and an action cost (c(s, a, s'))
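
For illustration, a minimal sketch of how these components might be bundled in Python (the class name, field names, and dict-based interface are my own choices, not from the card):

```python
from dataclasses import dataclass, field

@dataclass
class SearchProblem:
    """Hypothetical container for the pieces of a search problem."""
    initial_state: str                                  # s0
    goal_states: frozenset                              # used by the goal test G(s)
    actions: dict = field(default_factory=dict)         # A(s): state -> list of actions
    transitions: dict = field(default_factory=dict)     # Result(s, a): (state, action) -> state
    costs: dict = field(default_factory=dict)           # c(s, a, s'): (state, action) -> cost

    def is_goal(self, s):
        return s in self.goal_states

    def result(self, s, a):
        return self.transitions[(s, a)]

    def action_cost(self, s, a):
        return self.costs.get((s, a), 1)                # default unit cost
```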

8
Q

What makes a solution optimal?

A

It has the lowest cost among all possible solutions

9
Q

What makes an open-loop system?

A

The agent ignores its percepts while executing, which breaks the loop between the agent and the environment

10
Q

What makes a closed-loop system?

A

The agent monitors its percepts, since the environment may be nondeterministic

11
Q

What is a state space graph?

A

A mathematical representation of a search problem with nodes being abstracted configurations and arcs representing transitions; Each state occurs only once

12
Q

What does a search algorithm do?

A

It takes a search problem as an input and returns a solution (or indication of failure)

13
Q

What is a search tree?

A

Essentially a “what-if” tree of plans and their outcomes: the root node is the start state, and the other nodes correspond to states, but each node really represents the plan that achieves that state

14
Q

Do we usually construct the entire state space graph or search tree?

A

No, it would take up too much memory

15
Q

What is the frontier of a search?

A

The set of all current leaf nodes, i.e., nodes that have been generated but not yet expanded

16
Q

What separation does the frontier create in a state-space graph?

A

An interior region where every state has been expanded and an exterior region of states that have not yet been reached

17
Q

Are search trees and state-space graphs the same?

A

No, they are two different structures: each state appears only once in the state-space graph, but it may correspond to many nodes in the search tree

18
Q

How does a priority queue work in search algorithms?

A

It first pops the node with the minimum cost according to some evaluation function, f(n)
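
A minimal sketch using Python's heapq module, assuming f(n) is just a number stored alongside each node (the tuple layout is illustrative):

```python
import heapq

frontier = []                        # min-heap ordered by f(n)
heapq.heappush(frontier, (7, "A"))   # entries are (f-value, node)
heapq.heappush(frontier, (3, "B"))
heapq.heappush(frontier, (5, "C"))

f, node = heapq.heappop(frontier)    # pops the minimum-f node first
print(f, node)                       # -> 3 B
```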

19
Q

What is a cycle/loopy path?

A

A path that visits the same state/node more than once

20
Q

How does a redundant path differ from a loopy path?

A

A redundant path shares the same initial and final state as another path but goes through unnecessary states that increase the total path cost; a loopy path is just a special case of a redundant path

21
Q

What are the 4 components of a node?

A

node.State, node.Parent, node.Action, node.Path-Cost
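
A minimal sketch of such a node in Python (the field names mirror the card; the helper that reconstructs a plan is illustrative):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # node.State: the state this node corresponds to
    parent: Optional["Node"] = None  # node.Parent: the node that generated this one
    action: Any = None               # node.Action: the action applied to the parent
    path_cost: float = 0.0           # node.Path-Cost: g(n), total cost from the root

def solution(node: Node) -> list:
    """Follow parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```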

22
Q

What are the 4 ways that performance is evaluated?

A

Completeness, optimality, time complexity, and space complexity

23
Q

How is complexity measured?

A

Depth/number of actions in an optimal solution (d), maximum number of actions in any path (m), and branching factor/number of successors of a node that need to be considered (b)

24
Q

What makes a search algorithm uninformed?

A

It only uses information provided in problem formulation and only distinguishes goal states from non-goal states

25
Q

Describe breadth-first search (BFS)

A

Expands the shallowest node first, uses a FIFO queue frontier, goal test is performed when a node is generated (before the node is added to the frontier)
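
A minimal BFS sketch over a graph given as an adjacency dict (the `graph`/`start`/`goal` interface is assumed for illustration); note the early goal test when a child is generated:

```python
from collections import deque

def bfs(graph, start, goal):
    """graph: state -> list of neighbors. Returns a state path or None."""
    if start == goal:
        return [start]
    frontier = deque([start])            # FIFO queue
    parents = {start: None}              # doubles as the reached set
    while frontier:
        state = frontier.popleft()       # shallowest node first
        for child in graph.get(state, []):
            if child not in parents:
                parents[child] = state
                if child == goal:        # goal test on generation
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return list(reversed(path))
                frontier.append(child)
    return None
```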

26
Q

What is the BFS performance?

A

Complete (if b is finite and the shallowest goal is at some finite depth d), optimal only if all action costs are equal (usually they’re not), TC = O(b^d), SC = O(b^d)

27
Q

Describe uniform cost search (UCS)

A

Expands the node with the lowest path cost g(n), uses a priority queue with the priority being cost, goal test is performed when the node is popped
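
A minimal UCS sketch, assuming `graph` maps each state to (neighbor, action cost) pairs; note the goal test happens only when a node is popped:

```python
import heapq

def ucs(graph, start, goal):
    """graph: state -> list of (neighbor, cost). Returns cheapest cost or None."""
    frontier = [(0, start)]              # priority queue keyed on g(n)
    best_g = {start: 0}
    while frontier:
        g, state = heapq.heappop(frontier)
        if state == goal:                # goal test on pop, not on generation
            return g
        if g > best_g.get(state, float("inf")):
            continue                     # stale queue entry
        for child, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child))
    return None
```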

28
Q

What is the UCS performance?

A

Complete (assuming C*, the optimal solution cost, is finite and epsilon, the minimum action cost, is > 0), optimal, TC = O(b^(C*/epsilon)), SC = O(b^(C*/epsilon))

29
Q

Describe depth-first search (DFS)

A

Expands the deepest node in the current frontier, uses a LIFO queue frontier, goal test is done after popping a node from the frontier
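
A minimal iterative DFS sketch (graph-search variant that remembers expanded states) using a plain Python list as the LIFO stack; the interface is assumed as in the earlier sketches:

```python
def dfs(graph, start, goal):
    """graph: state -> list of neighbors. Returns True if goal is reachable."""
    frontier = [start]                   # LIFO stack
    expanded = set()
    while frontier:
        state = frontier.pop()           # deepest (most recently pushed) node first
        if state == goal:                # goal test after popping
            return True
        if state in expanded:
            continue
        expanded.add(state)
        for child in graph.get(state, []):
            if child not in expanded:
                frontier.append(child)
    return False
```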

30
Q

What is the DFS performance?

A

Complete only as a graph search in a finite state space (the tree-search version can follow infinite or loopy paths), not optimal (with either graphs or trees), TC = O(b^m), SC = O(bm)

31
Q

Describe depth-limited search (DLS)

A

Essentially DFS with a single predetermined depth limit l

32
Q

What is the DLS performance?

A

Incomplete (if l < d), not optimal (if l > d), TC = O(b^l), SC = O(bl)

33
Q

Describe iterative deepening search (IDS)

A

Essentially DLS run repeatedly, starting with the smallest limit l = 0 and increasing it until a solution is found (i.e., until l = d)
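
A minimal sketch combining this card with the previous two: the inner `dls` helper is the depth-limited search, and the outer loop does the iterative deepening (the `graph`/`start`/`goal` interface is assumed as before):

```python
from itertools import count

def ids(graph, start, goal):
    """Iterative deepening: depth-limited DFS with l = 0, 1, 2, ..."""
    def dls(state, limit):
        # Depth-limited search: DFS that stops expanding at the given depth.
        if state == goal:
            return True                  # solution found
        if limit == 0:
            return "cutoff"              # hit the depth limit
        cutoff_occurred = False
        for child in graph.get(state, []):
            result = dls(child, limit - 1)
            if result is True:
                return True
            if result == "cutoff":
                cutoff_occurred = True
        return "cutoff" if cutoff_occurred else False

    # Note: with cycles and no reachable goal this loops forever,
    # reflecting IDS's incompleteness on infinite paths.
    for limit in count():
        result = dls(start, limit)
        if result != "cutoff":
            return result                # True (found) or False (space exhausted)
```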

34
Q

What is the IDS performance?

A

Complete (when b is finite), optimal (when path costs are identical), TC = O(b^d), SC = O(bd)

35
Q

Describe bidirectional search

A

Simultaneously searches forward from initial state and backwards from goal state, uses other preexisting search algorithm to do so. Solution is found when the two frontiers collide

36
Q

What is the bidirectional search performance?

A

Complete and optimal (provided the goal state is explicitly known so the backward search can start, and both directions use BFS), TC = O(b^(d/2)), SC = O(b^(d/2))

37
Q

What is a heuristic?

A

A function that estimates how close a state, usually node n, is to a goal, h(n)

38
Q

What queue do heuristic searches usually use?

A

Priority queues, where nodes are popped in decreasing order of desirability (most desirable first)

39
Q

Describe greedy search

A

f(n) = h(n), expands a node that seems to be closest to the goal

40
Q

What is the greedy search performance?

A

Incomplete (may get stuck in a loop, but is complete in finite state spaces), not optimal, TC = O(b^m), SC = O(b^m)

41
Q

Describe A* search

A

f(n) = g(n) + h(n), avoids expanding paths that are already expensive, combines UCS and greedy search advantages
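
A minimal A* sketch, assuming `graph` maps each state to (neighbor, cost) pairs and `h` is a caller-supplied heuristic function; greedy search is the same idea with f(n) = h(n) only:

```python
import heapq

def a_star(graph, start, goal, h):
    """Best-first search on f(n) = g(n) + h(n). Returns cheapest cost or None."""
    frontier = [(h(start), 0, start)]    # entries are (f, g, state)
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == goal:
            return g
        if g > best_g.get(state, float("inf")):
            continue                     # stale queue entry
        for child, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h(child), new_g, child))
    return None
```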

42
Q

What is the A* search performance?

A

Complete; optimal if h is admissible (for tree search) or consistent (for graph search); TC = O(b^m), SC = O(b^m) in the worst case

43
Q

What does the explored set look like for informed algorithms?

A

Like hash tables, or Python’s dictionaries

44
Q

Describe IDA* search

A

Essentially A* with an iterative-deepening aspect (as IDS relates to DFS): it doesn’t keep all reached states in memory, and it reiterates with a cutoff on the f-cost (g+h); on each iteration, the new cutoff is the smallest f-cost of any node that exceeded the previous cutoff (the cutoffs act like contours)

45
Q

Describe recursive best-first search (RBFS)

A

Uses only linear space; uses f-limit to keep track of the f-value of the best alternative path available from any ancestor of the current node; if the current node’s f-value exceeds this limit, the recursion unwinds back to the alternative path

46
Q

Describe simplified memory-bounded A* (SMA*)

A

Essentially A* with a memory limit; when/if the memory limit is reached, the oldest worst leaf node (highest f-value) is dropped and its f-value is backed up to its parent

47
Q

What makes a heuristic admissible?

A

If 0 <= h(n) <= h*(n), where h*(n) is the true cost to the nearest goal and h(n) is the estimated cost to the nearest goal

48
Q

What makes a heuristic consistent?

A

If h(n) <= c(n,a,n') + h(n'), where n' is a successor of node n reached by action a
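
For intuition, a small sketch that checks both properties on an explicit problem, assuming `h` and `h_star` are dicts from state to estimated/true cost and `graph` maps each state to (successor, action cost) pairs (all illustrative):

```python
def is_admissible(h, h_star, states):
    """Admissible: 0 <= h(n) <= h*(n) for every state n."""
    return all(0 <= h[n] <= h_star[n] for n in states)

def is_consistent(h, graph):
    """Consistent: h(n) <= c(n, a, n') + h(n') for every edge n -> n'."""
    return all(h[n] <= cost + h[n2]
               for n, edges in graph.items()
               for n2, cost in edges)
```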

49
Q

What’s the relation between consistency and admissibility of a heuristic?

A

All consistent heuristics are admissible but not vice versa

50
Q

How do search contours work?

A

Each contour is labeled with a value, in which every node has f(n) = g(n) + h(n) <= value

51
Q

Describe a graph search

A

Tree search with a set of expanded states, expand the search tree node-by-node but never expand a state twice
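
A minimal sketch of the idea, assuming the same adjacency-dict interface as the earlier cards; the only addition over tree search is the `reached` set, so no state is expanded twice:

```python
from collections import deque

def graph_search(graph, start, goal):
    """BFS-style search plus a reached set so no state is expanded twice."""
    frontier = deque([start])
    reached = {start}                    # states already added to the search tree
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for child in graph.get(state, []):
            if child not in reached:     # never add the same state twice
                reached.add(child)
                frontier.append(child)
    return False
```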

52
Q

What is the point of creating admissible heuristics, and where do they come from?

A

They are needed to solve hard search problems optimally; good admissible heuristics are often the exact solution costs of relaxed problems (the same problem with fewer constraints/assumptions)

53
Q

How can you tell if one heuristic is better than another?

A

If one heuristic dominates the other: for example, if h2(n) >= h1(n) for all n (and both are admissible), then h2 is the better/dominant heuristic