Module 1 & 2 Flashcards
What is AI?
A system that thinks like humans, thinks rationally, acts like humans, or acts rationally (the four definitions of AI)
What’s an Agent
Entity that perceives and acts
What’s a rational agent
An agent that selects actions to maximize the expected value of its performance measure (utility)
List characteristics of an agent and environment
The agent perceives the environment through sensors and acts on the environment through actuators
What’s an agent function
Maps percept sequences to actions; the agent program takes in the current percept to determine the action
What’s agent function dependent on
The machine (architecture) and the agent program
Can all agent functions be implemented by an agent program?
No
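The agent-function cards above can be made concrete with a minimal sketch (all names and the toy vacuum-world table are illustrative assumptions, not from the cards): a table-driven agent program that keeps the percept history and looks the whole sequence up in a table, implementing an agent function directly.

```python
# Sketch of a table-driven agent program (illustrative, not from the cards).
# The agent FUNCTION maps the full percept sequence to an action; this
# PROGRAM implements it by table lookup, which is why not every agent
# function fits on a real machine: the table grows with every percept.

def make_table_driven_agent(table):
    """Return an agent program that appends each percept to its
    history and looks the whole sequence up in `table`."""
    percepts = []  # percept history kept inside the program

    def program(percept):
        percepts.append(percept)
        # Unknown percept sequences fall back to doing nothing.
        return table.get(tuple(percepts), "NoOp")

    return program

# Usage: a toy two-location vacuum world with locations A and B.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```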
What does rational depend on
Performance measure, agent's prior knowledge of the environment, actions the agent can perform, percept sequence to date
What does performance measure evaluate
The sequence of environment states
Can Rational agents explore and learn? Autonomous? Omniscient?
Autonomous and may explore and learn, but not omniscient
What is the task environment
Performance measure, Environment, Actuators, Sensors (PEAS)
Environment types
Fully vs. partially observable, single vs. multi-agent, deterministic vs. stochastic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous
Fully observable vs partially
Fully observable - the state contains all info about the environment relevant to the task
Partially observable - the state does not contain all the info; the agent needs memory
Single Agent vs Multi agent
How many agents in env, how do their actions affect us?
Multi-agent - other agents' actions affect our performance measure; an agent may behave randomly (e.g., to be unpredictable)
Deterministic vs stochastic domain
Deterministic - the resulting state depends only on the current state and action; certainty
Stochastic - uncertainty in the resulting state, probability is involved; the agent needs contingency plans
Episodic domain vs Sequential domain
Episodic - actions are independent of previous actions; no long-term consequences
Sequential - current choice will affect future actions, long term consequences
Static vs dynamic domain
Static - the environment doesn't change while the agent deliberates, so the agent has time to decide
Dynamic - the environment changes while the agent deliberates
Discrete vs continuous
Discrete - a limited number of distinct, clearly defined states, percepts, actions, and time steps
Continuous - otherwise; the agent needs a continuous controller
Agent Types
Simple reflex, reflex with state (model-based), goal-based, utility-based
Simple reflex
Selects actions on the basis of the current percept only, ignoring percept history.
The environment needs to be fully observable
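A minimal sketch of a simple reflex agent, using the same toy vacuum world as a running assumption (percepts and actions are illustrative): it reads only the current percept and applies condition-action rules, with no memory of past percepts.

```python
# Sketch of a simple reflex agent for a toy vacuum world (illustrative).
# It decides from the CURRENT percept (location, status) alone, which is
# why it needs a fully observable environment.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules: clean if dirty, otherwise move to the
    # other square.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```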
Reflex agents with states
Keeps memory state of percept history
The internal state is maintained using a transition model and a sensor model.
What is transition model
How the world works
What is sensor model
How state of world is reflected in agents percept
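The reflex-agent-with-state cards can be sketched as follows (class name, percepts, and the toy models are assumptions for illustration): the agent keeps an internal state of what it has seen, updated by a sensor model, and uses a toy transition model to decide when it is done.

```python
# Sketch of a reflex agent with state (model-based) for a toy vacuum
# world (illustrative). Internal state remembers squares the agent
# cannot currently see, so partial observability becomes workable.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each location.
        self.model = {"A": None, "B": None}

    def program(self, percept):
        location, status = percept
        # Sensor model: how the percept reflects the world state.
        self.model[location] = status
        if status == "Dirty":
            return "Suck"
        # Transition model (toy): once both squares are known Clean,
        # nothing more needs doing.
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.program(("A", "Dirty")))  # Suck
print(agent.program(("A", "Clean")))  # Right (B still unknown)
print(agent.program(("B", "Clean")))  # NoOp  (both known clean)
```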
Goal based agents and pro / con
Agents require goal info describing desirable situations
Pro - the knowledge that supports decisions is explicit
Con - can't handle trade-offs or uncertainty / probability
Utility based agents
Goals alone are not enough, since they can be achieved non-optimally
Uses utility function to determine best action
Pro - computes expected value of actions and handles uncertainty
Con - can't easily index into actions
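The utility-based-agent card can be sketched as a small expected-utility calculation (function names, actions, and the probability/utility numbers are illustrative assumptions): for each action, sum probability times utility over the possible outcomes, then pick the maximizing action.

```python
# Sketch of the core of a utility-based agent (illustrative). Each
# action has a distribution over outcomes; the agent picks the action
# with the highest expected utility, handling uncertainty that a pure
# goal-based agent cannot.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def choose_action(action_outcomes):
    """action_outcomes: dict mapping action -> [(prob, utility), ...]."""
    return max(action_outcomes,
               key=lambda a: expected_utility(action_outcomes[a]))

# A risky action vs. a safe one (toy numbers):
actions = {
    "risky": [(0.5, 10.0), (0.5, -8.0)],  # expected utility = 1.0
    "safe":  [(1.0, 2.0)],                # expected utility = 2.0
}
print(choose_action(actions))  # safe
```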
What is an atomic representation
Each state is indivisible, with no internal structure
What is a factored representation
Splits each state into a fixed set of variables or attributes, each with its own value
What is a structured representation
Objects and their various relationships can be described explicitly