Agents and State Spaces Flashcards

1
Q

Rational Agent

A

Chooses action that maximizes the expected performance measure given sequence of observed perceptions

2
Q

Performance Measure

A

Evaluates sequence of configurations of environment, chosen by creator

3
Q

Unknown vs Known Environment

A

Whether the agent knows the environment's rules ('physics') beforehand, i.e., the outcomes (or outcome probabilities) of its actions

4
Q

Types of Known Environments

A
  1. Fully vs Partially Observable
  2. Single-Agent vs Multi
  3. Deterministic vs Stochastic
  4. Episodic vs Sequential
  5. Static vs Dynamic
  6. Discrete vs Continuous
5
Q

Fully vs Partially Observable Environment

A

Can the agent sense all relevant aspects of the environment directly?

6
Q

Single vs Multi Agent Environment

A

Is more than one agent acting at the same time? Whether an entity counts as an agent (rather than part of the environment) depends on whether its behavior is best described as intentional, e.g., competitive or cooperative

7
Q

Deterministic vs Stochastic Environment

A

Do actions always have the same outcome?

8
Q

Static vs Dynamic Environment

A

Can the environment change while the agent is deliberating?

9
Q

Discrete vs Continuous Environment

A

Are there finitely many distinct states, percepts, and actions (e.g., chess), or do they vary continuously (e.g., driving)?

10
Q

Agent Architectures

A
  1. Table Driven Agent
  2. Simple Reflex Agent
  3. Model-Based Reflex Agent
  4. Goal-Based Agent
  5. Utility-Based Agent
  6. Learning Agent
11
Q

Table Driven Agent

A

Fills in a table that maps every possible percept history (the key) to an action; the table can be reused for future actions, but it grows exponentially with the length of the history, so this is inefficient

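The lookup idea can be sketched as follows; this is a minimal illustration, and the vacuum-world percepts and table entries are made up for the example.

```python
# Minimal sketch of a table-driven agent (illustrative; the vacuum-world
# percepts and table entries below are made up for the example).
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept-history tuples to actions
        self.percepts = []      # every percept observed so far

    def act(self, percept):
        self.percepts.append(percept)
        # the ENTIRE history is the key, which is why the table grows
        # exponentially and the approach is inefficient
        return self.table[tuple(self.percepts)]

# toy table: each percept is a (location, dirt-status) pair
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move-right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = TableDrivenAgent(table)
```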
12
Q

Simple Reflex Agent

A

Uses a table of condition-action rules, with the current percept as the key, to look up an action; keeps no history or internal state

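A minimal sketch of the rule lookup, with toy vacuum-world rules made up for the example:

```python
# Minimal sketch of a simple reflex agent: a rule table keyed by the
# CURRENT percept only -- no history, no internal state.
def make_simple_reflex_agent(rules):
    def act(percept):
        return rules[percept]   # the current percept is the whole key
    return act

# toy vacuum-world rules (illustrative only)
rules = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move-right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move-left",
}
act = make_simple_reflex_agent(rules)
```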
13
Q

State

A

An encoding of the current configuration of the environment

14
Q

Model-Based Reflex Agent

A

Handles partial observability.
Maintains an internal state: combines the previous state, the latest sensor input, and the known 'physics' of the world, then uses the updated state as the key in a rule table to look up an action

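The state-update-then-lookup loop can be sketched like this; the 1-D corridor world and its 'bump' percept are made up for the example.

```python
# Minimal sketch of a model-based reflex agent: an internal state is
# updated from the previous state and latest percept via a transition
# model (the known 'physics'), then drives the rule lookup.
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules, initial_state):
        self.state = initial_state
        self.update_state = update_state  # physics: (state, percept) -> state
        self.rules = rules                # state -> action

    def act(self, percept):
        # fold the new percept into the internal state, then look it up
        self.state = self.update_state(self.state, percept)
        return self.rules[self.state]

# toy physics: the agent tracks its corridor position even though the
# percept only says whether its last move bumped into a wall
def update_state(position, percept):
    return position if percept == "bump" else position + 1

rules = {0: "forward", 1: "forward", 2: "stop"}
agent = ModelBasedReflexAgent(update_state, rules, initial_state=0)
```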
15
Q

Goal-Based Agent

A

Handles sequential environments.
Plans one step into the future.
Uses sensors, state, and the known 'physics' of the world to predict future states, which are used as keys in a GOAL table to look up an action

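One-step lookahead can be sketched as below; the 1-D line world and its goal are made up for the example.

```python
# Minimal sketch of a goal-based agent: it predicts, one step ahead,
# the state each action would produce (using the known 'physics') and
# picks an action whose predicted state passes the goal test.
def make_goal_based_agent(actions, result, goal_test):
    # result(state, action) -> predicted next state
    def act(state):
        for action in actions:
            if goal_test(result(state, action)):
                return action
        return None             # no single action reaches the goal
    return act

# toy world: positions on a line, goal is position 3
result = lambda s, a: s + 1 if a == "right" else s - 1
goal_test = lambda s: s == 3
act = make_goal_based_agent(["left", "right"], result, goal_test)
```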
16
Q

Utility-Based Agent

A

Handles stochastic environments.
Uses probabilities to compute the expected 'goodness' (utility) of future states.
Uses sensors, state, and the known 'physics' of the world to predict the possible outcomes of each action, which are used as keys in a UTILITY table; the action with the highest expected utility is chosen
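Expected-utility maximization can be sketched as below; the toy outcome model and utility function are made up for the example.

```python
# Minimal sketch of a utility-based agent: each action leads to a
# probability distribution over next states, and the agent picks the
# action with the highest expected utility.
def make_utility_based_agent(actions, outcomes, utility):
    # outcomes(state, action) -> list of (probability, next_state) pairs
    def act(state):
        def expected_utility(action):
            return sum(p * utility(s) for p, s in outcomes(state, action))
        return max(actions, key=expected_utility)
    return act

# toy model: 'safe' surely reaches state 1; 'risky' reaches 4 or 0
outcomes = lambda s, a: [(1.0, 1)] if a == "safe" else [(0.5, 4), (0.5, 0)]
utility = lambda s: float(s)    # utility of a state = its value, here
act = make_utility_based_agent(["safe", "risky"], outcomes, utility)
```

In this toy setup 'risky' wins because its expected utility (0.5 * 4 + 0.5 * 0 = 2.0) beats the sure 1.0.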

17
Q

Learning Agent

A

Handles unknown environments.
Sensors feed the performance element, which selects actions; the critic scores the agent's behavior against a performance standard, the learning element uses that feedback to improve the performance element, and the problem generator suggests exploratory actions
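The component loop can be sketched as below; this is a deliberately tiny illustration (two actions, additive value updates) made up for the example, not a full learning architecture.

```python
# Minimal sketch of the learning-agent loop: the performance element
# picks the best-valued action, the critic supplies a reward, and the
# learning element updates the action values. (A problem generator
# would occasionally propose untried actions instead.)
class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # learned action values
        self.last_action = None

    def act(self):
        # performance element: choose the action with the best value
        self.last_action = max(self.values, key=self.values.get)
        return self.last_action

    def learn(self, reward):
        # the critic's reward drives the learning element's update
        self.values[self.last_action] += reward

agent = LearningAgent(["left", "right"])
```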

18
Q

Performance Element

A

Selects external actions given percepts; corresponds to what was previously considered the entire agent

19
Q

Learning Element

A

Uses feedback from the critic to improve the performance element's internal model

20
Q

Problem Generator

A

Suggests exploratory actions that lead to new, informative experiences

21
Q

Critic

A

Evaluates the agent's overall performance against a fixed standard and provides feedback to the learning element

22
Q

‘Physics’ of Environment

A

Dictates how the state changes over time in response to the agent's actions

23
Q

State Space

A

Set of all possible states

24
Q

State Space - Graph Form

A

For problems with a finite number of states, the entire state space and its transitions can be represented as a graph: nodes are states, and edges are the actions that transform one state into another
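Such a graph can be searched for a path of transitions to a goal state; here is a minimal sketch using breadth-first search, with a toy graph made up for the example.

```python
# Minimal sketch: a finite state space as an adjacency mapping, with
# breadth-first search recovering a path of transitions from a start
# state to a goal state.
from collections import deque

def bfs_path(graph, start, goal):
    frontier = deque([[start]])     # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                     # goal unreachable

# toy state space: S, A, B, G are states; edges are transitions
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
```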