Search Flashcards

1
Q

Agent

A

An entity that perceives its environment and acts upon that environment

2
Q

State

A

A configuration of the agent and its environment

3
Q

Initial State

A

The state in which the AI begins

4
Q

Actions

A

Choices that can be made in a state

5
Q

Actions(s)

A

A function that returns the set of actions that can be executed in state s

6
Q

Transition Model

A

A description of what state results from performing any applicable action in any state

7
Q

Results(s, a)

A

Returns the state resulting from performing action a in state s
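
To make these two functions concrete, here is a minimal Python sketch for a hypothetical toy problem (the 2x2 grid, the (row, col) state encoding, and the MOVES table are assumptions for illustration, not from the cards):

    # Hypothetical toy problem: an agent on a 2x2 grid, state = (row, col)
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def actions(s):
        # Actions(s): the set of actions executable in state s (moves that stay on the grid)
        r, c = s
        return {a for a, (dr, dc) in MOVES.items() if 0 <= r + dr <= 1 and 0 <= c + dc <= 1}

    def result(s, a):
        # Results(s, a): the state resulting from performing action a in state s
        dr, dc = MOVES[a]
        return (s[0] + dr, s[1] + dc)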

8
Q

State Space

A

The set of all states reachable from the initial state by any sequence of actions
- Viewed as a graph, each node represents a state and each edge represents an action that takes you from one state to another

9
Q

Goal Test

A

A way to determine whether a given state is a goal state

10
Q

Path Cost

A

Numerical cost associated with a given path

11
Q

Nodes for Search

A

A node for a search needs to track:
1. State
2. Parent
3. Action
4. Path cost
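
A minimal Python sketch of such a node (the field and method names are assumptions):

    class Node:
        # A search node tracks the four items above
        def __init__(self, state, parent=None, action=None, path_cost=0):
            self.state = state            # the state this node represents
            self.parent = parent          # the node that generated this one
            self.action = action          # the action applied to the parent to get here
            self.path_cost = path_cost    # total cost from the initial state to this node

        def solution(self):
            # Recover the action sequence by following parent links back to the root
            actions, node = [], self
            while node.parent is not None:
                actions.append(node.action)
                node = node.parent
            return list(reversed(actions))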

12
Q

Frontier

A

The collection of nodes waiting to be explored next; a stack for DFS or a queue for BFS

13
Q

Depth First Search

A

Search algorithm that always expands the deepest node in the frontier

14
Q

Breadth First Search

A

Search algorithm that always expands the shallowest node in the frontier

15
Q

Uninformed Search

A

Search strategies that use no problem-specific knowledge

16
Q

DFS vs BFS

A

DFS uses a stack as its frontier
BFS uses a queue as its frontier
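
A quick sketch of the only operational difference, using Python's deque as both stack and queue (the node labels are placeholders):

    from collections import deque

    frontier = deque(["A"])        # frontier holding one node (a placeholder label)
    frontier.append("B")           # adding a node is the same for both algorithms

    dfs_next = frontier.pop()      # DFS: last in, first out (stack) -> "B"
    bfs_next = frontier.popleft()  # BFS: first in, first out (queue) -> "A"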

17
Q

Informed Search

A

Search strategy that uses problem-specific knowledge to find solutions more efficiently

18
Q

What could go wrong with searching nodes?

A

Imagine two states, each with an action leading back to the other: without remembering which states you have already explored, the search can loop between them forever and never find the goal

19
Q

Approach to Searching

A
  • Start with a frontier that contains the initial state
    - The frontier is a stack for DFS or a queue for BFS
  • Start with an empty explored set
  • Repeat:
    - If the frontier is empty, then there is no solution (exit failure)
    - Otherwise, remove a node from the frontier
    - If the node contains the goal state, then return the solution (exit success)
    - Otherwise, add the node to the explored set
    - Expand the node and add the resulting nodes to the frontier, skipping any already in the frontier or the explored set (see the Python sketch below)
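
A minimal sketch of this loop in Python, reusing the hypothetical Node, actions, and result helpers sketched in the earlier cards (goal_test is likewise an assumption):

    from collections import deque

    def search(initial_state, goal_test, bfs=True):
        # Frontier is a queue for BFS, a stack for DFS; explored prevents loops
        frontier = deque([Node(initial_state)])
        explored = set()
        while frontier:                                   # empty frontier -> failure
            node = frontier.popleft() if bfs else frontier.pop()
            if goal_test(node.state):
                return node.solution()                    # success
            explored.add(node.state)
            for a in actions(node.state):                 # expand the node
                child = Node(result(node.state, a), node, a, node.path_cost + 1)
                if child.state not in explored and all(n.state != child.state for n in frontier):
                    frontier.append(child)
        return None

On the toy grid, search((0, 0), lambda s: s == (1, 1)) returns a shortest action sequence such as ['down', 'right'].
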
20
Q

Greedy Best-First Search

A

Search algorithm that expands the node that is closest to the goal, as estimated by a heuristic function h(n)

21
Q

Why is greedy best-first search not always ideal?

A

It might appear closer to the goal, but there could be an obstacle. The heuristic estimates the distance to the goal without accounting for a wall or dead end in between.

Thus you might not find the optimal path

22
Q

A* Search

A

Search algorithm that expands the node with the lowest value of g(n) + h(n)

g(n) = the cost to reach the node (accumulated along the path so far)

h(n) = the estimated cost from the node to the goal
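
A minimal sketch using Python's heapq as the frontier, with a hypothetical Manhattan-distance heuristic and unit step costs (both assumptions):

    import heapq, itertools

    def a_star(start, goal, actions, result, h):
        tie = itertools.count()                    # tie-breaker so the heap never compares states
        frontier = [(h(start), next(tie), 0, start, [])]   # entries are (f, tie, g, state, path)
        explored = set()
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)  # lowest f = g + h first
            if state == goal:
                return path
            if state in explored:
                continue
            explored.add(state)
            for a in actions(state):
                s2, g2 = result(state, a), g + 1            # assuming unit step costs
                heapq.heappush(frontier, (g2 + h(s2), next(tie), g2, s2, path + [a]))
        return None

    def manhattan(s, goal=(1, 1)):
        # Hypothetical admissible heuristic for the toy grid: grid distance to the goal
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])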

23
Q

When is A* Search optimal?

A

h(n) is admissible (never overestimates the true cost)

h(n) is consistent

(for every node n and successor n’ with step cost c, h(n) <= h(n’) + c)

24
Q

What are the drawbacks of A* Search?

A

It uses a lot of memory

25
Q

Adversarial Search

A

Typically in games like tic-tac-toe or chess: there is an opposing force trying to prevent you from reaching your goal

26
Q

Minimax Search

A

Used for adversarial, turn-based searches. One player tries to maximize the score and the other to minimize it, by searching each possible action to see whether it gets you closer to your goal
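
A minimal sketch of the recursion, assuming game-specific helpers terminal(s), utility(s), actions(s), and result(s, a) exist (they are assumptions, in the spirit of the earlier cards):

    def minimax(state, maximizing, game):
        # Returns the value of state with both players playing optimally
        if game.terminal(state):
            return game.utility(state)        # e.g. 1 for a win, -1 for a loss, 0 for a draw
        values = (minimax(game.result(state, a), not maximizing, game)
                  for a in game.actions(state))
        return max(values) if maximizing else min(values)
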
27
Q

Alpha-Beta Pruning

A

Eliminating choices that cannot affect the outcome, to improve performance.
Example: if Action A can result in a score of 5, 10, or 6, the most you will get is 5, because a smart opponent picks the result that is worst for you. If Action B can result in 4, 20, or 3, we can stop as soon as we see the 4: a smart opponent will hold B to at most 4, which is already worse than the 5 guaranteed by A. So we don't need to look at the remaining results of Action B after finding the 4.
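
The same recursion with cutoffs added: alpha is the best value the maximizer can already guarantee, beta the best the minimizer can (a sketch built on the same assumed game helpers):

    def alphabeta(state, maximizing, game, alpha=float("-inf"), beta=float("inf")):
        if game.terminal(state):
            return game.utility(state)
        if maximizing:
            value = float("-inf")
            for a in game.actions(state):
                value = max(value, alphabeta(game.result(state, a), False, game, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break          # prune: the minimizer will never allow this branch
            return value
        value = float("inf")
        for a in game.actions(state):
            value = min(value, alphabeta(game.result(state, a), True, game, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break              # prune: the maximizer already has a better option
        return value
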
28
Q

Depth-Limited Minimax

A

As the game gets more complex, we limit the search to a fixed number of moves ahead instead of exploring the whole game tree

29
Q

Evaluation Function

A

Instead of minimax returning only 1 or -1 for a win or loss, an evaluation function estimates how good an intermediate state is with finer-grained values, e.g. this move is worth 0.8 and that move is worth -0.6
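
A sketch tying the last two cards together: minimax cut off at a fixed depth, falling back to a hypothetical evaluate(state) estimate instead of a true win/loss utility:

    def depth_limited_minimax(state, depth, maximizing, game, evaluate):
        if game.terminal(state):
            return game.utility(state)        # true value: win/loss/draw
        if depth == 0:
            return evaluate(state)            # estimate, e.g. 0.8 = probably winning
        values = (depth_limited_minimax(game.result(state, a), depth - 1,
                                        not maximizing, game, evaluate)
                  for a in game.actions(state))
        return max(values) if maximizing else min(values)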