1 - Intelligent Agents Flashcards
Fully observable
Agent can detect the complete state of the environment (Partially observable)
Partially observable
Agent cannot observe the complete state of the environment (Fully observable)
Multi-Agent Environment
Environment contains multiple agents (Single-Agent Environment)
Deterministic
Environment is deterministic if its next state is fully determined by its current state and the action of the agent (Stochastic)
Episodic
Actions taken in one episode do not affect later episodes (Sequential)
Static Environment
Environment changes only through the actions of the agent (Dynamic)
Dynamic Environment
Environment can change independently of the agent's actions
Simple Reflex Agent
Agent that selects its actions based only on the current percept
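A minimal sketch of this idea: the agent is a direct mapping from the current percept to an action via condition-action rules (the rule set and percept format below are illustrative assumptions, not a specific textbook implementation).

```python
# Sketch of a simple reflex agent: the condition-action rules and the
# percept format are illustrative assumptions.

RULES = {
    "dirty": "suck",        # if the current square is dirty, clean it
    "clean": "move_right",  # otherwise move on
}

def simple_reflex_agent(percept: str) -> str:
    """Select an action based only on the current percept."""
    return RULES.get(percept, "no_op")

print(simple_reflex_agent("dirty"))  # -> "suck"
```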
Model-Based Reflex Agent
Extension of the Simple Reflex Agent that handles partial observability by maintaining an internal state (model) of the environment
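A sketch of the extension: the agent folds each new percept into an internal state and applies its rules to that state rather than to the raw percept (the state-update logic and rules here are placeholders for illustration).

```python
# Sketch of a model-based reflex agent; the state update and the rule
# are illustrative placeholders.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}        # internal model of the partially observable environment
        self.last_action = None

    def update_state(self, percept: dict) -> None:
        # Fold the new percept into the internal model; a real agent would
        # also use a transition model of how the world evolves on its own.
        self.state.update(percept)

    def act(self, percept: dict) -> str:
        self.update_state(percept)
        # Condition-action rule evaluated on the internal state
        action = "suck" if self.state.get("dirty") else "move_right"
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent.act({"dirty": True}))   # -> "suck"
print(agent.act({"dirty": False}))  # -> "move_right"
```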
Goal-Based Agent
Extension of the Model-Based Reflex Agent that considers a goal explicitly
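A sketch of the difference: the agent predicts the outcome of each available action and picks one whose outcome satisfies the goal (the goal, actions, and transition model below are made-up illustrations).

```python
# Sketch of a goal-based agent; the goal and the one-step outcome model
# are illustrative assumptions.

GOAL = "clean"

OUTCOMES = {              # predicted result of each action in the current state
    "move_right": "dirty",
    "suck": "clean",
}

def goal_based_agent(actions) -> str:
    """Pick an action whose predicted outcome satisfies the goal."""
    for action in actions:
        if OUTCOMES[action] == GOAL:
            return action
    return "no_op"

print(goal_based_agent(["move_right", "suck"]))  # -> "suck"
```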
Utility-Based Agent
Extension of the Goal-Based Agent that tries to achieve its goals while maximizing utility
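A sketch of the idea: instead of only checking whether an action reaches the goal, the agent scores the predicted outcome of each action with a utility function and chooses the best one (the outcome model and utility values below are made up for illustration).

```python
# Sketch of a utility-based agent; the outcome model and utility numbers
# are illustrative assumptions.

OUTCOMES = {             # predicted outcome state for each action
    "highway": "fast_but_risky",
    "side_road": "slow_but_safe",
}

UTILITY = {              # utility function over outcome states
    "fast_but_risky": 0.6,
    "slow_but_safe": 0.8,
}

def utility_based_agent(actions) -> str:
    """Choose the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: UTILITY[OUTCOMES[a]])

print(utility_based_agent(["highway", "side_road"]))  # -> "side_road"
```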
Learning Agent
Extends any agent with a learning element that improves the agent's behavior based on experience
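One way to picture the learning element (a sketch under simplified assumptions, not a specific learning algorithm): feedback after each action is used to update the knowledge that the performance element selects actions from.

```python
# Sketch of a learning agent; the feedback signal and the update rule are
# illustrative assumptions.

class LearningAgent:
    def __init__(self):
        # Performance element's knowledge: estimated value of each action
        self.action_values = {"suck": 0.0, "move_right": 0.0}

    def act(self) -> str:
        # Performance element: pick the currently best-valued action
        return max(self.action_values, key=self.action_values.get)

    def learn(self, action: str, reward: float, lr: float = 0.1) -> None:
        # Learning element: improve the estimate from experience (feedback)
        self.action_values[action] += lr * (reward - self.action_values[action])

agent = LearningAgent()
agent.learn("suck", reward=1.0)
print(agent.act())  # -> "suck"
```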
Rational Agent
Agent that acts to maximize its performance measure, i.e. its expected performance
Utility Function
Used internally by the agent to evaluate how desirable a state or course of action is
PEAS
Specification of the task environment: Performance measure - Environment - Actuators - Sensors
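As an example, a PEAS description can be written down as a simple record; the entries below follow the classic automated-taxi illustration.

```python
# PEAS specification of a task environment, illustrated with the classic
# automated-taxi example.

peas_taxi = {
    "Performance measure": ["safe", "fast", "legal", "comfortable trip"],
    "Environment":         ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":           ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":             ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}

for component, examples in peas_taxi.items():
    print(f"{component}: {', '.join(examples)}")
```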