Flashcard Questions - AI

1
Q

An entity that perceives its environment and acts upon that environment.

A

Agent

2
Q

A configuration of an agent in its environment.

A

State

3
Q

The state from which the search algorithm starts.

A

Initial State

4
Q

Choices that can be made in a state.

A

Actions

5
Q

A description of what state results from performing any applicable action in any state.

A

Transition Model
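A minimal Python sketch of a transition model for a hypothetical grid world (the names MOVES and result are illustrative, not from the source): the model is simply a function from a state and an action to the resulting state.

```python
# Hypothetical grid world: states are (row, col) tuples, actions are directions.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def result(state, action):
    """Transition model: the state that results from applying `action` in `state`."""
    row, col = state
    d_row, d_col = MOVES[action]
    return (row + d_row, col + d_col)

print(result((2, 2), "up"))  # (1, 2)
```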

6
Q

The set of all states reachable from the initial state by any sequence of actions.

A

State Space

7
Q

The condition that determines whether a given state is a goal state.

A

Goal Test

8
Q

A numerical cost associated with a given path.

A

Path Cost

9
Q

A sequence of actions that leads from the initial state to the goal state.

A

Solution

10
Q

A solution that has the lowest path cost among all solutions.

A

Optimal Solution

11
Q

contains the following data:
* A state
* Its parent node, through which the current node was generated
* The action that was applied to the state of the parent to get to the current node
* The path cost from the initial state to this node

A

node
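A minimal Python sketch of such a node as a data structure; the class and field names mirror the card but are otherwise illustrative.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # a state
    parent: Optional["Node"] = None  # node through which this node was generated
    action: Any = None               # action applied to the parent's state to get here
    path_cost: float = 0.0           # cost of the path from the initial state to this node
```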

12
Q

the mechanism that “manages” the nodes: the collection of nodes that remain to be explored

A

frontier

13
Q

search algorithm that exhausts one direction completely before trying another direction

A

Depth-First Search

14
Q

search algorithm where the frontier is managed as a stack data structure

A

Depth-First Search
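A minimal Python sketch of a frontier managed as a stack (last in, first out); always removing the most recently added node is what produces depth-first behaviour. The class name is illustrative.

```python
class StackFrontier:
    """Frontier managed as a stack: the newest node is removed first."""

    def __init__(self):
        self.nodes = []

    def add(self, node):
        self.nodes.append(node)

    def remove(self):
        return self.nodes.pop()  # take from the end: depth-first

    def empty(self):
        return len(self.nodes) == 0
```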

15
Q

search algorithm that follows multiple directions at the same time, taking one step in each possible direction before taking a second step in any of them.

A

Breadth-First Search

16
Q

search algorithm where the frontier is managed as a queue data structure

A

Breadth-First Search
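A minimal Python sketch of a frontier managed as a queue (first in, first out); always removing the oldest node is what produces breadth-first behaviour. The class name is illustrative.

```python
from collections import deque

class QueueFrontier:
    """Frontier managed as a queue: the oldest node is removed first."""

    def __init__(self):
        self.nodes = deque()

    def add(self, node):
        self.nodes.append(node)

    def remove(self):
        return self.nodes.popleft()  # take from the front: breadth-first

    def empty(self):
        return len(self.nodes) == 0
```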

17
Q

A type of algorithm that considers additional knowledge to try to improve its performance

A

Informed Search Algorithm

18
Q

search algorithm that expands the node that appears to be closest to the goal, as determined by a heuristic function h(n).

A

Greedy Best-First Search
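A minimal Python sketch of greedy best-first search, where the frontier is ordered purely by the heuristic value h(n); the parameters `successors` (yields neighbouring states) and `h` (the heuristic) are assumptions for illustration.

```python
import heapq
import itertools

def greedy_best_first(start, goal_test, successors, h):
    """Always expand the frontier node with the smallest heuristic value h(n)."""
    counter = itertools.count()  # tie-breaker so heap entries stay comparable
    frontier = [(h(start), next(counter), start, [start])]
    explored = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), next(counter), nxt, path + [nxt]))
    return None
```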

19
Q

ignores walls and counts how many steps up, down, or to the sides it would take to get from one location to the goal location

A

Manhattan Distance
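A minimal Python sketch, assuming locations are (row, col) tuples on a grid:

```python
def manhattan_distance(location, goal):
    """Steps up/down plus steps left/right, ignoring any walls in between."""
    return abs(location[0] - goal[0]) + abs(location[1] - goal[1])

print(manhattan_distance((1, 1), (4, 3)))  # 3 + 2 = 5
```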

20
Q

function that estimates how close a given node is to the goal; the estimate can be mistaken.

A

heuristic function

21
Q

The efficiency of the greedy best-first algorithm depends on

A

how good the heuristic function is

22
Q

considers not only h(n), the estimated cost from the current location to the goal, but also g(n), the cost accrued so far to reach the current location.

A

A* Search
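A minimal Python sketch of A* search; compared with the greedy sketch above, the frontier is ordered by g(n) + h(n). Here `successors` is assumed to yield (next_state, step_cost) pairs; the names are illustrative.

```python
import heapq
import itertools

def a_star(start, goal_test, successors, h):
    """Expand the frontier node with the lowest g(n) + h(n)."""
    counter = itertools.count()  # tie-breaker for the heap
    frontier = [(h(start), 0, next(counter), start, [start])]  # (f, g, tie, state, path)
    best_g = {start: 0}
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, step_cost in successors(state):
            new_g = g + step_cost  # cost accrued so far
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h(nxt), new_g, next(counter), nxt, path + [nxt]),
                )
    return None, float("inf")
```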

23
Q

For A* search to be optimal, the heuristic function must be:

A

Admissible & Consistent

24
Q

The heuristic function h(n) of an A* search algorithm is consistent if

A

for every node n and successor node n’ with step cost c, h(n) ≤ h(n’) + c

25
Q

What does it mean for the heuristic function h(n) of an A* search algorithm to be admissible?

A

Never overestimates the actual cost to reach a goal from any node

26
Q

algorithm that faces an opponent that tries to achieve the opposite goal.

A

Adversarial Search

27
Q

represents winning conditions as (-1) for one side and (+1) for the other side.

A

Minimax

28
Q

Recursively, the algorithm simulates all possible games that can take place, beginning at the current state and continuing until a terminal state is reached. Each terminal state is valued as either (-1), 0, or (+1).

A

Minimax
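A minimal Python sketch of the recursion described above. The game interface (`terminal`, `utility`, `player`, `actions`, `result`) is an assumption for illustration, not code from the source.

```python
def minimax_value(state, game):
    """Minimax value of `state`: the utility (-1, 0, or +1) the player to move
    can guarantee against optimal play."""
    if game.terminal(state):
        return game.utility(state)  # -1, 0, or +1
    values = [minimax_value(game.result(state, action), game)
              for action in game.actions(state)]
    # The maximizing player picks the largest value, the minimizing player the smallest.
    return max(values) if game.player(state) == "MAX" else min(values)
```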

29
Q

an optimization technique for minimax that skips some of the recursive computations that are decidedly unfavorable

A

Alpha-Beta Pruning
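A minimal sketch of minimax with alpha-beta pruning, using the same assumed game interface as the minimax sketch above; a branch is cut off as soon as it can no longer affect the final decision.

```python
def alphabeta(state, game, alpha=float("-inf"), beta=float("inf")):
    """Minimax value of `state`, skipping branches that cannot change the outcome."""
    if game.terminal(state):
        return game.utility(state)
    if game.player(state) == "MAX":
        value = float("-inf")
        for action in game.actions(state):
            value = max(value, alphabeta(game.result(state, action), game, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # MIN would never allow this branch
                break
        return value
    else:
        value = float("inf")
        for action in game.actions(state):
            value = min(value, alphabeta(game.result(state, action), game, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:  # MAX would never allow this branch
                break
        return value
```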

30
Q

Minimax that considers only a pre-defined number of moves before it stops, without necessarily reaching a terminal state.

A

Depth-Limited Minimax

31
Q

estimates the expected utility of the game from a given state, or, in other words, assigns values to states.

A

Evaluation function
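A minimal sketch tying the last two cards together: depth-limited minimax searches only `depth` moves ahead and scores the deepest positions with an evaluation function instead of searching to a terminal state. The game interface and `evaluate` are assumptions for illustration.

```python
def depth_limited_minimax(state, game, depth, evaluate):
    """Minimax that looks at most `depth` moves ahead; positions at the depth
    limit are scored by `evaluate(state)` rather than searched further."""
    if game.terminal(state):
        return game.utility(state)
    if depth == 0:
        return evaluate(state)  # estimated utility of this non-terminal state
    values = [depth_limited_minimax(game.result(state, action), game, depth - 1, evaluate)
              for action in game.actions(state)]
    return max(values) if game.player(state) == "MAX" else min(values)
```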