Agents and State Spaces Flashcards
Rational Agent
Chooses the action that maximizes the expected performance measure, given the sequence of perceptions observed so far
Performance Measure
Evaluates a sequence of environment configurations; chosen by the agent's designer
Unknown vs Known Environment
Whether the agent knows the rules of the environment (the outcomes of its actions) beforehand
Types of Environments
- Fully vs Partially Observable
- Single-Agent vs Multi
- Deterministic vs Stochastic
- Episodic vs Sequential
- Static vs Dynamic
- Discrete vs Continuous
Fully vs Partially Observable Environment
Can the agent sense all relevant aspects of the environment directly?
Single vs Multi Agent Environment
Is more than one agent acting at the same time? Whether an entity counts as an agent is based on intention: is its behavior best described as pursuing a goal that depends on this agent's actions?
Deterministic vs Stochastic Environment
Does each action always have the same outcome in a given state?
Episodic vs Sequential Environment
Does the current decision affect future decisions, or is each percept-action episode independent?
Static vs Dynamic Environment
Can the environment change while the agent is deliberating?
Discrete vs Continuous Environment
Are there finitely many distinct states, percepts, and actions (e.g., moves on a board), rather than continuously varying quantities (e.g., steering angles)?
Agents Architectures
- Table Driven Agent
- Simple Reflex Agent
- Model-Based Reflex Agent
- Goal-Based Agent
- Utility-Based Agent
- Learning Agent
Table Driven Agent
Looks up an action in a table indexed by the complete perception history; the table must cover every possible percept sequence, so it is far too large to build for most environments
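A minimal Python sketch of the table-driven idea (the table and percepts are illustrative, not from the source): the whole percept history is the lookup key, which is why the table blows up in size.

```python
def make_table_driven_agent(table):
    """Return an agent whose actions come from a pre-built table
    keyed by the entire perception history."""
    percepts = []  # full perception history, used as the lookup key

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))

    return agent

# Tiny illustrative table for a two-step vacuum-like world.
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move",
}
agent = make_table_driven_agent(table)
print(agent("dirty"))   # key ("dirty",) -> suck
print(agent("clean"))   # key ("dirty", "clean") -> move
```

Note that even this two-percept toy needs one entry per possible history, which grows exponentially with the number of steps.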
Simple Reflex Agent
Uses a table of condition-action rules, keyed on the current percept alone, to look up an action; keeps no history
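A sketch of the simple reflex idea (rule table is illustrative): unlike the table-driven agent, only the current percept is consulted, so the rule table stays small but the agent cannot use history.

```python
def simple_reflex_agent(rules, percept):
    """Look up an action using the current percept as the only key."""
    return rules.get(percept, "noop")

# Illustrative condition-action rules.
rules = {"dirty": "suck", "clean": "move"}
print(simple_reflex_agent(rules, "dirty"))  # -> suck
```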
State
Encoding of the current configuration of the environment
Model-Based Reflex Agent
Handles partial observability.
Maintains an internal state, updated from sensor input and a model of how the world works (its "physics"), and uses that state as the key into a rule table to look up an action
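A sketch of the model-based reflex idea (the update rule and names are illustrative): the agent folds each percept into a remembered state, so when the sensor gives nothing it can still act on its internal model, handling partial observability.

```python
def make_model_based_agent(rules, update_state, initial_state):
    """Return an agent that keys its rule table on an internal state,
    updated from percepts via a world model rather than raw percepts."""
    state = {"value": initial_state}

    def agent(percept):
        # Fold the new percept into the remembered state.
        state["value"] = update_state(state["value"], percept)
        return rules.get(state["value"], "noop")

    return agent

def update_state(state, percept):
    # Illustrative "physics": if the sensor reading is missing
    # (partial observability), keep the remembered state.
    return percept if percept is not None else state

rules = {"dirty": "suck", "clean": "move"}
agent = make_model_based_agent(rules, update_state, "clean")
print(agent("dirty"))  # fresh percept -> suck
print(agent(None))     # no reading: remembered state is still "dirty" -> suck
```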
Goal-Based Agent
Handles sequential environments.
Plans one step into the future.
Uses sensors, internal state, and the known physics of the world to predict the state each action would produce, and uses those predicted states as keys into a GOAL table to look up an action