02 Intelligent Agents Flashcards
An intelligent agent is anything that:
i) perceives its environment through sensors
ii) acts upon the environment through actuators
In control theory, one typically distinguishes between …
the system one wants to control and the environment.
In the AI setting, this distinction is often not made.
Percept sequence
An agent’s percept sequence is the …
complete history of its perception.
Vacuum cleaner example: [A,Dirty], [A,Clean], [B,Clean], [A,Clean].
The behavior of an agent can …
be fully described by its agent function.
Agent function: An agent function maps …
any given percept sequence to an action.
Depending on the length of the percept sequence, the agent can …
make smarter choices.
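A tabular agent function can be sketched directly as a lookup table keyed by the whole percept sequence (a minimal vacuum-world sketch; the table entries are illustrative assumptions, not from the card):

```python
# Table-driven agent: the agent function as an explicit lookup table
# keyed by the ENTIRE percept sequence seen so far (vacuum world).
# The table contents below are illustrative assumptions.

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # the percept sequence observed so far

def table_driven_agent(percept):
    """Append the new percept, then look up the action for the whole sequence."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # Suck
print(table_driven_agent(("A", "Clean")))  # Right
```

Note that the key grows with every percept, which is exactly why such tables explode in size for longer percept sequences.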
Comments on Agent Functions
1. Expressiveness: Tabular agent functions can, in theory, describe the behavior of any agent.
- Practicality: Tabular agent functions have no practical use since they are infinite, or very large even when one only considers finite percept sequences. Examples of table sizes: 1 h recording of a camera (640x480 pixels, 24-bit color): 10^250,000,000,000; chess: 10^150 (estimated number of atoms in the universe: 10^80).
- Solution: …
- What is a good agent function? This question is answered by the concept of rational agents
An agent program is a practical implementation of an agent function.
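The camera figure above can be sanity-checked: one hour of video fixes the number of possible percept sequences, namely 2 raised to the number of input bits. A quick estimate (assuming 30 frames per second, a frame rate not stated in the card):

```python
import math

# Bits delivered by the camera in one hour (30 fps is an assumed frame rate).
bits_per_frame = 640 * 480 * 24
bits_per_hour = bits_per_frame * 30 * 3600

# Number of distinct percept sequences is 2**bits_per_hour; express the
# exponent in base 10 via log10(2**n) = n * log10(2).
decimal_digits = bits_per_hour * math.log10(2)

# decimal_digits comes out around 2.4e11, i.e. a table of roughly
# 10^240,000,000,000 entries -- the order of magnitude quoted above.
print(f"table size ~ 10^{decimal_digits:.3g}")
```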
Rational Agent (I)
Rationality: A system is rational if it does the “right thing”, i.e., achieves ideal performance.
An obvious performance measure is not always available. A designer has to find an acceptable measure.
Example vacuum cleaner: (2 options)
Option 1: Amount of dirt cleaned up in a certain amount of time. Problem: An optimal solution could be to clean up a section, dump the dirt back onto it, clean it up again, and so on.
Option 2: Reward clean floors by providing points for each clean floor at each time step.
What is rational at any given time depends on four things. What are they?
the performance measure
the agent’s prior knowledge of the environment
the actions that the agent can perform
the agent’s percept sequence up to now
Rational Agent
For each possible percept sequence, a rational agent should select an action that is …, given the prior percept sequence and its built-in knowledge.
expected to maximize its performance measure.
Omniscient agent
An omniscient agent knows …
the actual outcome of its actions, which is impossible in reality.
Example: Just imagine you knew the outcome of betting money on something.
A rational agent (≠ omniscient agent) maximizes …
expected performance.
Learning
Rational agents are able to …
learn from perception, i.e., they improve their knowledge of the environment over time.
Autonomy
In AI, a rational agent is considered more autonomous if it is less dependent on …
prior knowledge, relying instead on newly learned abilities.
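The omniscient/rational distinction hinges on maximizing *expected* rather than actual performance. A toy sketch of the betting example (the probabilities and payoffs are invented for illustration):

```python
# A rational agent picks the action whose outcome distribution has the
# highest expected score; it cannot know the actual outcome in advance.
# Probabilities and payoffs below are illustrative assumptions.

actions = {
    "bet":      [(0.1, 100.0), (0.9, -10.0)],  # (probability, payoff) pairs
    "dont_bet": [(1.0, 0.0)],
}

def expected_value(outcomes):
    """Expected payoff of an action given its outcome distribution."""
    return sum(p * v for p, v in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # bet  (E[bet] = 1.0 > E[dont_bet] = 0.0)
```

With these numbers betting is rational even though it usually loses; an omniscient agent, by contrast, would bet only when it knew the bet would win.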
To design a rational agent, we have to specify the task environment. We use the PEAS (Performance, Environment, Actuators, Sensors) description.
Give examples of PEAS for an automated taxi.
Performance: safety, time, profits, legality, comfort
Environment: streets, traffic, pedestrians, weather
Actuators: steering, accelerator, brake, horn, speaker/display
Sensors: video, accelerometers, radar, GPS, lidar
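A PEAS description can be captured as a simple record type (a minimal sketch; the class name and field names are my own, filled with the taxi values from the card):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task-environment description: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The automated-taxi example from the card.
taxi = PEAS(
    performance=["safety", "time", "profits", "legality", "comfort"],
    environment=["streets", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "radar", "GPS", "lidar"],
)
print(taxi.performance)
```

The same record type covers the Internet shopping agent below by swapping in its P, E, A, and S lists.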
Give possible PEAS for an Internet Shopping Agent.
P: price, quality, appropriateness, efficiency
E: websites, vendors, shippers
A: display to user, follow URL, fill in form
S: HTML pages
What are the properties of task environments?
Fully observable vs Partially observable
Single agent vs Multi agent
Deterministic vs Stochastic
Episodic vs Sequential
Discrete vs Continuous
Static vs Dynamic
Known vs Unknown
Explain the following properties of task environments:
Observability
Single vs Multi agent
Deterministic vs Stochastic
Episodic vs Sequential
- An environment is fully observable if the agent can detect the complete state of the environment, and partially observable otherwise.
  Example: The vacuum-cleaner world is partially observable since the robot only knows whether the current square is dirty.
- An environment is a multi-agent environment if it contains several agents, and a single-agent environment otherwise.
  Example: The vacuum-cleaner world is a single-agent environment. A chess game is a two-agent environment.
- An environment is deterministic if its next state is fully determined by its current state and the action of the agent, and stochastic otherwise.
  Example: The automated-taxi environment is stochastic since the behavior of other traffic participants is unpredictable. The output of a calculator is deterministic.
- An environment is episodic if the actions taken in one episode (in which the agent senses and acts) do not affect later episodes, and sequential otherwise.
  Example: Detecting defective parts on a conveyor belt is episodic. Chess and automated taxi driving are sequential.
Explain the following properties of task environments:
Discrete vs Continuous
Static vs Dynamic
Known vs Unknown
- The discrete/continuous distinction applies to both the state and the time:
  continuous state + continuous time: e.g., robot
  continuous state + discrete time: e.g., weather station
  discrete state + continuous time: e.g., traffic light control
  discrete state + discrete time: e.g., chess
- If an environment only changes through the actions of the agent, it is static, and dynamic otherwise.
  Example: The automated-taxi environment is dynamic. A crossword puzzle is static.
- An environment is known if the agent knows the outcomes (or outcome probabilities) of its actions, and unknown otherwise. In the latter case, the agent first has to learn how the environment behaves.
  Example: An agent that knows all the rules of the card game it plays is in a known environment.
Agent Types
Besides categorizing the task environment, we also categorize agents into four categories of increasing generality:
…
simple reflex agents, reflex agents with state, goal-based agents, utility-based agents.
All of these can be turned into learning agents.
Explain: simple reflex agents, reflex agents with state, goal-based agents, utility-based agents.
- Simple reflex agents select actions based only on the current percept, using condition–action rules.
- Reflex agents with state keep an internal model of the world to handle aspects of the environment they cannot currently perceive.
- Goal-based agents choose actions that bring them closer to an explicit goal.
- Utility-based agents choose actions that maximize a utility function over states, allowing trade-offs between conflicting goals.
(See slides.)
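The first of these, the simple reflex agent, can be sketched for the vacuum world as a few condition–action rules over the current percept only (a textbook-style sketch, not taken from the slides):

```python
def simple_reflex_vacuum_agent(percept):
    """Choose an action from the current percept alone: no state, no history."""
    location, status = percept
    if status == "Dirty":   # rule 1: dirty square -> clean it
        return "Suck"
    if location == "A":     # rule 2: clean and in A -> move right
        return "Right"
    return "Left"           # rule 3: clean and in B -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```

Because the agent ignores its percept history, it works only when the correct action is always determined by the current percept, i.e., in this fully reactive setting.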