Lecture 3 Flashcards

1
Q

States

A

They describe the configuration of the environment.

2
Q

Actions

A

Move an agent from one state to another

3
Q

Transition model (Successor function)

A

Describes what each action does: given a state and an action, it returns the resulting (successor) state.

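A transition model can be written directly as a successor function. A minimal sketch, assuming a toy grid world (the `successors` function and the grid layout are illustrative, not from the lecture):

```python
# A toy successor function for a 2x2 grid world.
# States are (row, col) positions; each action moves the agent one cell.
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def successors(state, rows=2, cols=2):
    """Return {action: resulting_state} for every legal action in `state`."""
    result = {}
    for action, (dr, dc) in ACTIONS.items():
        r, c = state[0] + dr, state[1] + dc
        if 0 <= r < rows and 0 <= c < cols:  # stay inside the grid
            result[action] = (r, c)
    return result

print(successors((0, 0)))  # {'down': (1, 0), 'right': (0, 1)}
```
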
4
Q

State space

A

The set of states reachable from the initial state by applying any sequence of actions

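The state space, as defined above, can be computed with a breadth-first sweep from the initial state. A minimal sketch assuming a small hand-made successor map (the `SUCCESSORS` graph and state names are illustrative):

```python
from collections import deque

# Illustrative successor map: state -> states one action away.
SUCCESSORS = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": [], "s3": [], "s4": ["s0"]}

def state_space(initial):
    """The set of states reachable from `initial` by any action sequence."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in SUCCESSORS[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(sorted(state_space("s0")))  # ['s0', 's1', 's2', 's3']  (s4 is unreachable)
```
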
5
Q

What is the difference between a goal state and a solution?

A

The goal state is the destination; a solution is the sequence of actions (a path) that leads from the initial state to a goal state.

6
Q

When is a search solution optimal?

A

When it has the lowest path cost.

7
Q

Think through the states, actions, goal test, and path cost for the following.

8-puzzle
Robotics assembly
Pacman

A

8-puzzle: states = configurations of the tiles; actions = slide a tile into the blank (up/down/left/right); goal test = tiles match the goal configuration; path cost = number of moves.
Robotics assembly: states = positions of the robot's joints and the parts; actions = motor motions; goal test = assembly is complete; path cost = time or energy to execute.
Pacman: states = Pacman's position (plus remaining dots, ghosts, etc. as needed); actions = move north/south/east/west; goal test = e.g. all dots eaten; path cost = number of steps.
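As a concrete instance of such a formulation, the 8-puzzle's successor function fits in a few lines. This is an illustrative sketch, not the lecture's code: a state is a 9-tuple read row by row, with 0 standing for the blank.

```python
def puzzle_successors(state):
    """State: a 9-tuple read row by row, 0 = blank.
    Return the states reachable by sliding one tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    results = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # blank moves u/d/l/r
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = r * 3 + c
            new = list(state)
            new[blank], new[swap] = new[swap], new[blank]
            results.append(tuple(new))
    return results

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(len(puzzle_successors(goal)))  # 2 (blank in a corner has two moves)
```
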
8
Q

What’s the difference between the search state and the world state?

A

The world state includes every detail of the environment; the search state keeps only the details needed for problem solving.

9
Q

Can state space graphs usually be built?

A

No. There are usually far too many states to build the full graph in memory, so it is kept implicit.

10
Q

What is the difference between a search tree and a state space graph?

A

In a state space graph each state occurs only once. A search tree models possible futures, so the same state can appear in many nodes; each node corresponds to a path from the initial state.

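The distinction shows up even in a two-state graph: the graph holds each state once, but the search tree has a node per path, so states repeat. A minimal sketch (the `GRAPH` and the path representation are illustrative):

```python
# Two-state graph A <-> B: 2 states, but the depth-3 search tree has 4 nodes.
GRAPH = {"A": ["B"], "B": ["A"]}

def tree_paths(start, depth):
    """All root-to-node paths of length <= depth, i.e. the search-tree nodes."""
    paths = [[start]]
    frontier = [[start]]
    for _ in range(depth):
        frontier = [p + [s] for p in frontier for s in GRAPH[p[-1]]]
        paths.extend(frontier)
    return paths

paths = tree_paths("A", 3)
print(len(paths))                   # 4 tree nodes...
print(len({p[-1] for p in paths}))  # ...over only 2 distinct states
```
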