SEARCH Flashcards

1
Q

An entity that perceives its environment and acts upon that environment.

A

Agent

2
Q

A configuration of an agent in its environment

A

State

3
Q

The state from which the search algorithm starts

A

Initial State

4
Q

Choices that can be made in a state.

A

Actions

5
Q

A description of what state results from performing any applicable action in any state

A

Transition Model

6
Q

The set of all states reachable from the initial state by any sequence of actions.

A

State Space

7
Q

The condition that determines whether a given state is a goal state

A

Goal Test

8
Q

A numerical cost associated with a given path

A

Path Cost

9
Q

A sequence of actions that leads from the initial state to the goal state.

A

Solution

10
Q

A solution that has the lowest path cost among all solutions.

A

Optimal Solution

11
Q

Node - a data structure that contains the following:

A
  • A state
  • Its parent node (the node that generated this node)
  • The action applied to the parent to reach this node
  • The path cost from the initial state to this node
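A minimal sketch of such a node in Python (the class and attribute names are illustrative, not taken from the lecture code):

class Node:
    """A node in the search tree."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # the state this node represents
        self.parent = parent        # the node that generated this node
        self.action = action        # the action applied to the parent to reach this state
        self.path_cost = path_cost  # cost of the path from the initial state to this node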
12
Q

The mechanism that “manages” the nodes

A

Frontier

13
Q

Exhausts one direction completely before trying another direction

A

Depth-First Search Algorithm

14
Q

In depth-first search, the frontier is managed as a ____

A

stack data structure

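A minimal stack-based frontier sketch in Python (class and method names are assumptions, not quoted from the lecture code):

class StackFrontier:
    """Frontier that removes the most recently added node first (last in,
    first out), which produces depth-first search."""
    def __init__(self):
        self.nodes = []

    def add(self, node):
        self.nodes.append(node)

    def empty(self):
        return len(self.nodes) == 0

    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        return self.nodes.pop()  # pop from the end of the list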
15
Q

Pros and Cons of depth-first search

A

Pros:
1. At best, this algorithm is the fastest. If it “lucks out” and always chooses the right path to the solution (by chance), then depth-first search takes the least possible time to get to a solution.
Cons:
1. It is possible that the found solution is not optimal.
2. At worst, this algorithm will explore every possible path before finding the solution, thus taking the longest possible time before reaching the solution.

16
Q

Opposite of depth-first search

A

breadth-first search

17
Q

This algorithm follows multiple directions at the same time, taking one step in each possible direction before taking a second step in any of them

A

breadth-first search algorithm

18
Q

In breadth-first search, the frontier is managed as a ____

A

queue data structure
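Under the same assumptions as the stack-based sketch above, the only change needed for breadth-first search is removing nodes from the front of the list instead of the end:

class QueueFrontier(StackFrontier):
    """Frontier that removes the earliest added node first (first in,
    first out), which produces breadth-first search."""
    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        return self.nodes.pop(0)  # pop from the front of the list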

19
Q

Pros and Cons of breadth-first search

A

Pros:
1. This algorithm is guaranteed to find the optimal solution.
Cons:
1. This algorithm is almost guaranteed to take longer than the minimal time to run.
2. At worst, this algorithm takes the longest possible time to run.

20
Q

A type of algorithm that considers additional knowledge to try to improve its performance

A

informed search algorithm

21
Q

A type of algorithm that does not utilize any knowledge about the problem that it did not acquire through its own exploration

A

uninformed search algorithm

22
Q

Expands the node that is the closest to the goal, as determined by a heuristic function h(n).

A

Greedy best-first search

23
Q

the function that estimates how close to the goal the next node is

A

heuristic function

24
Q

ignores walls and counts how many steps up, down, or to the sides it would take to get from one location to the goal location

A

Manhattan distance
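A possible one-function sketch in Python (function and argument names are illustrative):

def manhattan_distance(cell, goal):
    """Estimated number of steps from cell to goal, ignoring any walls."""
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)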

25
Q

Examples of uninformed search algorithms

A

Breadth-first search and depth-first search

26
Q

A development of the greedy best-first search algorithm

A

A* Search

27
Q

Considers not only h(n), the estimated cost from the current location to the goal, but also g(n), the cost that was accrued until the current location

A

A* Search

28
Q

For A* search to be optimal, the heuristic function h(n) should be:

A

1. Admissible: it never overestimates the true cost.
2. Consistent: for every node n and successor node n' with step cost c, h(n) ≤ h(n') + c.
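One way an A* frontier might be sketched in Python, assuming the Node class above and a heuristic function h(n) supplied by the caller (heapq-based; names are illustrative):

import heapq

class AStarFrontier:
    """Frontier that always removes the node with the lowest f(n) = g(n) + h(n)."""
    def __init__(self, heuristic):
        self.heuristic = heuristic  # h(n): estimated cost from a state to the goal (assumed callable)
        self.heap = []
        self.counter = 0            # tie-breaker so heapq never has to compare Node objects

    def add(self, node):
        f = node.path_cost + self.heuristic(node.state)  # f(n) = g(n) + h(n)
        heapq.heappush(self.heap, (f, self.counter, node))
        self.counter += 1

    def empty(self):
        return len(self.heap) == 0

    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        return heapq.heappop(self.heap)[2]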
29
Q

The algorithm faces an opponent that tries to achieve the opposite goal.

A

adversarial search

30
Q

A type of algorithm in adversarial search

A

Minimax

31
Q

Represents winning conditions as (-1) for one side and (+1) for the other side.

A

Minimax

32
Q

Max-Value pseudocode

A

function Max-Value(state):
    v = -∞
    if Terminal(state):
        return Utility(state)
    for action in Actions(state):
        v = Max(v, Min-Value(Result(state, action)))
    return v

33
Q

Min-Value pseudocode

A

function Min-Value(state):
    v = ∞
    if Terminal(state):
        return Utility(state)
    for action in Actions(state):
        v = Min(v, Max-Value(Result(state, action)))
    return v
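A runnable Python version of the two pseudocode functions above, assuming terminal, utility, actions, and result are defined for the specific game (they are hypothetical placeholders here):

import math

def max_value(state):
    if terminal(state):              # terminal(state): assumed game-specific test for game over
        return utility(state)        # utility(state): assumed game-specific payoff
    v = -math.inf
    for action in actions(state):    # actions(state): assumed set of legal moves
        v = max(v, min_value(result(state, action)))  # result: assumed transition model
    return v

def min_value(state):
    if terminal(state):
        return utility(state)
    v = math.inf
    for action in actions(state):
        v = min(v, max_value(result(state, action)))
    return v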
34
Q

A way to optimize Minimax

A

Alpha-beta pruning

35
Q

Skips some of the recursive computations that are decidedly unfavorable

A

Alpha-beta pruning
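A sketch of how alpha-beta pruning modifies the minimax functions above (same imports and hypothetical game helpers as before); alpha tracks the best value MAX can already guarantee, beta the best value MIN can already guarantee, and a branch is abandoned as soon as it cannot change the final decision:

def max_value(state, alpha=-math.inf, beta=math.inf):
    if terminal(state):
        return utility(state)
    v = -math.inf
    for action in actions(state):
        v = max(v, min_value(result(state, action), alpha, beta))
        if v >= beta:                # MIN would never allow this branch: prune
            return v
        alpha = max(alpha, v)
    return v

def min_value(state, alpha=-math.inf, beta=math.inf):
    if terminal(state):
        return utility(state)
    v = math.inf
    for action in actions(state):
        v = min(v, max_value(result(state, action), alpha, beta))
        if v <= alpha:               # MAX would never allow this branch: prune
            return v
        beta = min(beta, v)
    return v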
36
Q

Considers only a pre-defined number of moves before it stops, without ever getting to a terminal state.

A

Depth-limited Minimax

37
Q

Depth-limited Minimax relies on a(n) ____

A

evaluation function

38
Q

Estimates the expected utility of the game from a given state, or, in other words, assigns values to states

A

evaluation function

39
Q

Of the four search algorithms discussed in lecture — depth-first search, breadth-first search, greedy best-first search with Manhattan distance heuristic, and A* search with Manhattan distance heuristic — which one (or multiple, if multiple are possible) could be the algorithm used?

A

Could only be DFS

40
Q

Between depth-first search (DFS) and breadth-first search (BFS), which will find a shorter path through a maze?

A

BFS will sometimes, but not always, find a shorter path than DFS

41
Q

Why is depth-limited Minimax sometimes preferable to Minimax without a depth limit?

A

Depth-limited Minimax can arrive at a decision more quickly because it explores fewer states

42
Q

Consider the Minimax tree below, where the green up arrows indicate the MAX player and red down arrows indicate the MIN player. The leaf nodes are each labelled with their value. What is the value of the root node?

A

5