Agents and State Spaces Flashcards
Rational Agent
Chooses action that maximizes the expected performance measure given sequence of observed perceptions
Performance Measure
Evaluates sequence of configurations of environment, chosen by creator
Known vs Unknown Environment
Whether the agent (or its designer) knows the 'physics' of the environment, i.e. the outcomes of its actions, beforehand
Types of Known Environments
- Fully vs Partially Observable
- Single-Agent vs Multi
- Deterministic vs Stochastic
- Episodic vs Sequential
- Static vs Dynamic
- Discrete vs Continuous
Fully vs Partially Observable Environment
Can the agent directly sense all aspects of the environment relevant to its choice of action?
Single vs Multi Agent Environment
Is more than one agent acting at the same time? Whether another entity counts as an agent depends on intention: an opponent acting deliberately against you is an agent, while a randomly gusting wind is not.
Deterministic vs Stochastic Environment
Do the agent's actions always have the same outcome, given the current state?
Static vs Dynamic Environment
Can the environment change while the agent is deliberating?
Discrete vs Continuous Environment
Are there finitely many distinct states, percepts, and actions, or do they range over continuous values?
Agent Architectures
- Table Driven Agent
- Simple Reflex Agent
- Model-Based Reflex Agent
- Goal-Based Agent
- Utility-Based Agent
- Learning Agent
Table Driven Agent
Fills in a table that maps every possible percept history (the key) to an action; entries can be reused for future actions, but the table grows exponentially with history length, making it inefficient
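A minimal sketch of the idea in Python (the table, percepts, and actions below are made up for illustration): the entire percept history is the lookup key, which is why the table blows up.

```python
# Hypothetical table: percept history (as a tuple) -> action
TABLE = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
}

percept_history = []

def table_driven_agent(percept):
    """Append the new percept and key the table on the FULL history."""
    percept_history.append(percept)
    return TABLE.get(tuple(percept_history), "no-op")
```

Note the table needs one entry per possible history, not per state, so it grows exponentially with the number of percepts seen.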
Simple Reflex Agent
Uses a table of condition-action rules, keyed on the current state alone, to look up an action
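For contrast with the table-driven agent, a sketch of a simple reflex agent (hypothetical rules): only the current state is the key, so no history is stored.

```python
# Hypothetical condition-action rules: current state -> action
RULES = {
    "dirty": "suck",
    "clean": "move",
}

def simple_reflex_agent(state):
    # No percept history; the current state alone selects the action
    return RULES.get(state, "no-op")
```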
State
Encoding of current configuration of the environment
Model-Based Reflex Agent
Handles partial-observability.
Uses sensors, internal state, and the known 'physics' of the world as a key in a rule table to look up an action
Goal-Based Agent
Handles sequential environments.
Plans one step into the future.
Uses sensors, state, and the known 'physics' of the world to predict future states, which are used as a key in a GOAL table to look up an action
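A one-step-lookahead sketch of the goal-based idea (the transition model and states are hypothetical): the agent uses the 'physics' to predict the successor of each action and picks one whose predicted state satisfies the goal.

```python
# Hypothetical 'physics': (state, action) -> predicted next state
PHYSICS = {
    ("at_A", "right"): "at_B",
    ("at_A", "left"): "at_A",
    ("at_B", "left"): "at_A",
}

def goal_based_agent(state, actions, is_goal):
    """Predict one step ahead; choose an action that reaches the goal."""
    for action in actions:
        predicted = PHYSICS.get((state, action), state)
        if is_goal(predicted):
            return action
    return "no-op"
```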
Utility-Based Agent
Handles stochastic environments.
Uses probability to find expected ‘goodness’ of future states.
Uses sensors, state, and the known 'physics' of the world to predict future states, which are used as a key in a UTILITY table to look up an action
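The 'expected goodness' computation can be sketched as follows (the stochastic model and utility values are made up): each action leads to several outcomes with known probabilities, and the agent maximizes the probability-weighted utility.

```python
# Hypothetical stochastic model: (state, action) -> [(probability, next_state)]
MODEL = {
    ("s0", "a"): [(0.8, "win"), (0.2, "lose")],
    ("s0", "b"): [(0.5, "win"), (0.5, "lose")],
}

# Hypothetical utility ('goodness') of each outcome state
UTILITY = {"win": 10.0, "lose": 0.0}

def expected_utility(state, action):
    # Probability-weighted average utility over possible outcomes
    return sum(p * UTILITY[s] for p, s in MODEL[(state, action)])

def utility_based_agent(state, actions):
    # Choose the action with maximum expected utility
    return max(actions, key=lambda a: expected_utility(state, a))
```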
Learning Agent
Handles unknown environments.
Sensors feed the performance element, which works with the critic, learning element, and problem generator to determine an action
Performance Element
Selects external actions based on percepts; corresponds to the entire agent in the earlier architectures
Learning Element
Improves the agent's internal model using feedback from the critic
Problem Generator
Suggests exploratory actions that lead to new, informative experiences
Critic
Evaluates the agent's overall performance against a fixed performance standard and gives feedback to the learning element
‘Physics’ of Environment
Dictates how the state of the environment changes over time in response to the agent's actions
State Space
Set of all possible states
State Space - Graph Form
For problems with a finite number of states, the entire state space and its transitions can be represented as a graph: states are nodes, actions are edges
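A small sketch of a finite state space in graph form (states and edges are hypothetical), with breadth-first search recovering a path from a start state to a goal state:

```python
from collections import deque

# Hypothetical state-space graph: state -> reachable successor states
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def bfs_path(start, goal):
    """Breadth-first search over the graph; returns a shortest path or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```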