Module 2: Intelligent Agents Flashcards
An ___ is anything that perceives its environment through sensors and acts upon that environment through actuators.
agent
Example:
* Human is an agent
* A robot is also an agent with cameras and motors
* A thermostat detecting room temperature
___ has eyes, ears, and other organs which work as sensors, and hands, legs, and a vocal tract which work as actuators.
Human agent
___ can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.
Robotic agent
___ can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Software agent
An agent is anything that perceives its environment through ___ and acts upon that environment through ___.
sensors, actuators
A ___ is a device which detects a change in the environment and sends the information to other electronic devices.
Sensor
An agent observes its environment through sensors.
___ are the components of machines that convert energy into motion.
Actuators
The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
___ are the devices which affect the environment. These can be legs, wheels, arms, fingers, wings, fins, and display screen.
Effectors
An ___ is a program that can make decisions or perform a service based on its environment, user input and experiences.
intelligent agent
These programs can be used to autonomously gather information on a regular, programmed schedule or when prompted by the user in real time.
An intelligent agent is a program that can make ___ or perform a service based on its environment, user input and experiences.
decisions
Intelligent agents may also be referred to as a ___, which is short for robot.
bot
An ___ is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals.
intelligent agent
An intelligent agent may learn from the ___ to achieve its goals.
environment
The main four rules for an AI agent
- Rule 1: An AI agent must have the ability to perceive the environment.
- Rule 2: The observation must be used to make decisions.
- Rule 3: The decision should result in an action.
- Rule 4: The action taken by an AI agent must be a rational action.
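The four rules above can be sketched as a perceive-decide-act loop. This is a minimal illustrative example, not from the flashcards: the thermostat rule and the 20-degree setpoint are assumptions.

```python
# Minimal sketch of the perceive-decide-act cycle behind the four rules.
# The thermostat rule and the 20-degree setpoint are made-up examples.

def perceive(environment):
    """Rule 1: sense the environment (here, read the temperature)."""
    return environment["temperature"]

def decide(percept):
    """Rule 2 and Rule 3: the observation drives a decision, which is an action."""
    return "heat_on" if percept < 20 else "heat_off"

def act(environment, action):
    """The chosen action changes the environment."""
    if action == "heat_on":
        environment["temperature"] += 1

env = {"temperature": 18}
for _ in range(3):
    action = decide(perceive(env))
    act(env, action)

print(env["temperature"])  # heating raised the temperature toward the setpoint
```

Rule 4 (rationality) would correspond to choosing the rule that maximizes the agent's performance measure; here the rule is simply hard-coded.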
Agent’s perceptual inputs at any given instant
Percept
Complete history of everything that the agent has ever perceived.
Percept sequence
Agent’s behavior is mathematically described by ___.
Agent function
Agent’s behavior is ___ described by agent function.
mathematically
A function mapping any given percept sequence to an action
Agent function
Agent’s behavior is ___ described by agent program.
practically
Agent’s behavior is practically described by ___.
Agent program
The real implementation
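The cards above distinguish the agent function (a mathematical mapping from percept sequences to actions) from the agent program (the real implementation). A small sketch of that distinction, with made-up percepts and actions:

```python
# Sketch: the agent *function* is defined over the whole percept sequence
# (f: P* -> A), while the agent *program* is the concrete implementation
# that receives one percept at a time and keeps its own history.
# The "dirty"/"clean" percepts are illustrative only.

def agent_function(percept_sequence):
    """Mathematical description: maps any percept sequence to an action."""
    return "act" if "dirty" in percept_sequence else "wait"

class AgentProgram:
    """Practical description: fed one percept per step, remembers the rest."""
    def __init__(self):
        self.history = []

    def __call__(self, percept):
        self.history.append(percept)
        return agent_function(self.history)

program = AgentProgram()
print(program("clean"))  # wait
print(program("dirty"))  # act
```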
A ___ is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
rational agent
A ___ is said to perform the right things.
rational agent
AI is about creating rational agents, which are used in game theory and decision theory for various real-world scenarios.
For an AI agent, rational action is most important because in AI ___ algorithms, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.
reinforcement learning
Rational agents in AI are very similar to ___.
intelligent agents
One that does the right thing
Rational agent
Every entry in the table for the agent function is correct
Rationality
[Rational agent] What is correct?
The actions that cause the agent to be most successful, so we need ways to measure ___.
success
An objective function that determines how successfully the agent performs
Performance measure
The rationality of an agent is measured by its ___.
performance measure
Rationality can be judged on the basis of these points
- Performance measure which defines the success criterion.
- Agent prior knowledge of its environment.
- Best possible actions that an agent can perform.
- The sequence of percepts.
Rationality differs from ___ because an omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.
Omniscience
A rational agent should select an action expected to maximize its ___, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
performance measure
An ___ knows the actual outcome of its actions in advance, with no other possible outcomes. However, this is impossible in the real world.
omniscient agent
The task of AI is to design an agent program which implements the ___.
agent function
The structure of an intelligent agent is a combination of ___ and ___.
architecture, agent program
___ is the machinery that an AI agent executes on.
Architecture
___ is used to map a percept to an action.
Agent function
___ is an implementation of agent function.
Agent program
An ___ executes on the physical architecture to produce the function f: P* → A.
agent program
___ is when, after experiencing an episode, the agent adjusts its behavior to perform better at the same job next time.
Learning
Does a rational agent depend only on the current percept?
No, the past ___ should also be used.
percept sequence
This is called learning.
If an agent just relies on the prior knowledge of its designer rather than its own percepts then the agent lacks ___.
autonomy
A rational agent should be ___ - it should learn what it can to compensate for partial or incorrect prior knowledge.
autonomous
E.g., a clock
* No input (percepts)
* Runs only on its own algorithm (prior knowledge)
* No learning, no experience, etc.
Sometimes the environment may not be the real world but an artificial yet very complex environment. Agents working in these environments are called ___.
Software agent (softbots)
Because all parts of the agent are software.
Task environments are the ___ while the rational agents are the ___.
problems, solutions
___ environments are the problems while the ___ agents are the solutions.
Task [environments], rational [agents]
In designing an agent, the first step must always be to specify the ___ as fully as possible.
task environment
___ is a type of model on which an AI agent works.
PEAS
When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms:
P: ___
E: ___
A: ___
S: ___
Performance measure
Environment
Actuators
Sensors
Let’s suppose a self-driving car; then the PEAS representation will be:
- Performance: Safety, time, legal drive, comfort
- Environment: Roads, other vehicles, road signs, pedestrian
- Actuators: Steering, accelerator, brake, signal, horn
- Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
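The PEAS description above can be recorded as structured data. A small sketch using the flashcards' self-driving-car example (the `PEAS` class name is an assumption for illustration):

```python
# Sketch: the PEAS description of an agent as a simple data structure,
# filled in with the self-driving-car example from the flashcards.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # P: performance measure
    environment: list  # E: environment
    actuators: list    # A: actuators
    sensors: list      # S: sensors

taxi = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)

print(taxi.sensors[0])  # camera
```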
An ___ is everything in the world which surrounds the agent, but it is not a part of an agent itself.
environment
An environment can be described as a situation in which an agent is ___.
present
The ___ is where the agent lives and operates; it provides the agent with something to sense and act upon.
environment
An environment is mostly said to be ___.
non-deterministic
As per ___ and ___, an environment can have various features from the point of view of an agent:
- Fully observable vs Partially Observable
- Static vs Dynamic
- Discrete vs Continuous
- Deterministic vs Stochastic
- Single-agent vs Multi-agent
- Episodic vs sequential
- Known vs Unknown
- Accessible vs Inaccessible
Russell, Norvig
If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a ___ environment; otherwise it is ___.
fully observable, partially observable
A ___ environment is easy, as there is no need to maintain an internal state to keep track of the history of the world.
fully observable
If an agent has no sensors in an environment, then such an environment is called ___.
unobservable
If an agent’s current state and selected action can completely determine the next state of the environment, then such environment is called a ___ environment.
deterministic
A ___ environment is random in nature and cannot be determined completely by an agent.
stochastic
In a ___, ___ environment, agent does not need to worry about uncertainty.
deterministic, fully observable
If next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is ___, otherwise, it is ___.
deterministic, stochastic
An environment that is deterministic except for actions of other agents
Strategic environment
In an ___ environment, there is a series of one-shot actions, and only the current percept is required for the action.
episodic
In a ___ environment, an agent requires memory of past actions to determine the next best action.
Sequential
Agent’s single pair of perception and action
Episode
The quality of the agent’s actions does not depend on other episodes, making every episode independent of the others.
Episodic
An episodic environment is simpler, as the agent does not need to think ahead.
An environment where the current action may affect all future decisions
Sequential
A ___ environment is always changing over time
dynamic
An environment that does not change over time, but the agent’s performance score does
Semidynamic
If the environment can change while an agent is deliberating, then such an environment is called a ___ environment; otherwise it is called a ___ environment.
dynamic, static
___ environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
Static
In a ___ environment, agents need to keep looking at the world before each action.
dynamic
Taxi driving is an example of a dynamic environment whereas crossword puzzles are an example of a static environment.
If there are a finite number of percepts and actions that can be performed within an environment, then such an environment is called a ___ environment; otherwise it is called a ___ environment.
discrete, continuous
If there are a limited number of distinct states, clearly defined percepts and actions, the environment is ___.
discrete
A chess game comes under ___ environment as there is a finite number of moves that can be performed.
discrete
If only one agent is involved in an environment and operates by itself, then such an environment is called a ___ environment.
single agent
If multiple agents are operating in an environment, then such an environment is called a ___ environment.
multi-agent
The agent ___ problems in the multi-agent environment are different from single agent environment.
design
___ and ___ are not actually features of an environment but rather an agent’s state of knowledge to perform an action.
Known, unknown
Known and unknown are not actually a feature of an environment, but it is an agent’s or designer’s ___ to perform an action.
state of knowledge
In a ___ environment, the outcomes of all actions are given.
known
If the environment is ___, the agent will have to learn how it works in order to perform an action and make good decisions.
unknown
It is quite possible for a known environment to be ___ and an unknown environment to be ___.
partially observable, fully observable
Some sort of computing device (sensors + actuators)
Architecture
Agent = ___ + ___
architecture, program
Some function that implements the agent mapping = “?”
(Agent) Program
Job of AI
Agent Program
If an agent can obtain complete and accurate information about the environment’s state, then such an environment is called an ___ environment; otherwise it is called ___.
Accessible, inaccessible
An empty room whose state can be defined by its temperature is an example of an ___ environment.
accessible
Information about an event on Earth is an example of an ___ environment.
Inaccessible
Input for Agent Program
Only the current percept
Input for Agent Function
The entire percept sequence
The agent must remember all of them.
Implement the agent program as a ___
lookup table (agent function)
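Implementing the agent program as a lookup table over the entire percept sequence can be sketched as below. The table entries and percept names are made up for illustration:

```python
# Sketch of a table-driven agent program: the agent function is stored
# as a lookup table indexed by the *entire* percept sequence, so the
# agent must remember every percept it has ever received.
# The table entries and percept names are illustrative only.

table = {
    ("A-dirty",): "suck",
    ("A-clean",): "right",
    ("A-clean", "B-dirty"): "suck",
}

percepts = []  # the agent must remember all percepts

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts))

print(table_driven_agent("A-clean"))  # right
print(table_driven_agent("B-dirty"))  # suck
```

The obvious drawback is size: the table needs one entry for every possible percept sequence, which grows far too fast for realistic environments; this motivates the agent-program types listed next.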
Types of agent programs
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents
- Learning agents
It uses just condition-action rules
Simple reflex agents
Simple reflex agents use ___ rules
condition-action
Simple reflex agents work only if the environment is ___.
fully observable
Simple reflex agents are efficient but have a narrow range of ___ because knowledge sometimes cannot be stated explicitly.
applicability
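A simple reflex agent maps the current percept directly to an action through condition-action rules. A sketch in the style of the vacuum-cleaner world (the locations and rules are assumptions, not from the flashcards):

```python
# Sketch of a simple reflex agent: condition-action rules over the
# *current* percept only, with no memory. It therefore only works when
# the environment is fully observable. Vacuum-world-style example;
# the locations "A"/"B" and the rules are made up for illustration.

def simple_reflex_agent(percept):
    location, status = percept
    if status == "dirty":   # condition-action rule: if dirty, then suck
        return "suck"
    if location == "A":     # condition-action rule: if clean at A, go right
        return "right"
    return "left"           # condition-action rule: if clean at B, go left

print(simple_reflex_agent(("A", "dirty")))  # suck
print(simple_reflex_agent(("A", "clean")))  # right
```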
Model-based Reflex Agents are for a world that is ___.
partially observable
An agent that has to keep track of an internal state that depends on the percept history, reflecting some of the unobserved aspects.
Model-based Reflex Agents
Model-based Reflex Agents require two types of knowledge
- How the world evolves independently of the agent
- How the agent’s actions affect the world
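A model-based reflex agent can be sketched as a simple reflex agent plus an internal state updated from the percept history. The world model below is a deliberately trivial assumption for illustration:

```python
# Sketch of a model-based reflex agent: it keeps an internal state
# reflecting unobserved parts of a partially observable world, updated
# using the two kinds of knowledge listed above. The "world model" here
# is deliberately trivial and made up for illustration.

class ModelBasedAgent:
    def __init__(self):
        self.state = {}          # internal model of the world
        self.last_action = None  # used to predict how actions affect the world

    def update_state(self, percept):
        # fold the new percept into the internal model
        location, status = percept
        self.state[location] = status

    def __call__(self, percept):
        self.update_state(percept)
        location, _ = percept
        if self.state.get(location) == "dirty":
            self.last_action = "suck"
        else:
            self.last_action = "right" if location == "A" else "left"
        return self.last_action

agent = ModelBasedAgent()
print(agent(("A", "dirty")))  # suck
```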
An agent for which the current state of the environment is not always enough; it also has a goal to achieve.
Goal-based agents
Judgment of rationality / correctness
Goal-based agents choose actions (goals) based on the ___ and ___.
current state, current percept
Goal-based agents are less ___ but more ___.
efficient, flexible
Agent <— Different goals <— different tasks
Two other sub-fields in AI
Search and planning
[Goal-based agents] To find out the action sequences to achieve its goal
Search and planning
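Finding an action sequence that reaches a goal is exactly what search does. A minimal breadth-first-search sketch over a toy state graph (the graph and state names are made up):

```python
# Sketch: a goal-based agent uses search to find an *action sequence*
# that reaches its goal. Breadth-first search over a tiny made-up
# state graph; each edge stands for one action.
from collections import deque

graph = {"start": ["a", "b"], "a": ["goal"], "b": []}

def search(start, goal):
    frontier = deque([[start]])        # frontier holds paths, not states
    while frontier:
        path = frontier.popleft()      # FIFO -> breadth-first order
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            frontier.append(path + [nxt])
    return None                        # goal unreachable

print(search("start", "goal"))  # ['start', 'a', 'goal']
```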
An agent for which goals alone are not enough to generate high-quality behavior.
Utility-based agents
If goal means success, then ___ means the degree of success.
utility
[Utility-based agents] It is said state A has higher ___ if state A is more preferred than others.
utility
Utility is therefore a function that maps a state onto a real number; the degree of ___.
success
Utility has several advantages:
When there are conflicting goals, only some of the goals, not all, can be achieved; utility describes the appropriate ___.
trade-off
When there are several goals, none of which is certain to be achieved, utility provides a way for ___.
decision-making
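A utility-based agent can be sketched as picking the action whose resulting state has the highest utility (a real number measuring the degree of success). The states and utility values below are made-up assumptions:

```python
# Sketch of a utility-based agent: utility maps each state to a real
# number (the degree of success), and the agent chooses the action
# whose resulting state scores highest. The state names and numbers
# are made up for illustration.

def utility(state):
    """Degree of success of a state, as a real number."""
    return {"fast_but_risky": 0.4, "slow_but_safe": 0.7}.get(state, 0.0)

def choose(actions):
    """actions maps action name -> resulting state; pick the best."""
    return max(actions, key=lambda a: utility(actions[a]))

actions = {"speed": "fast_but_risky", "cruise": "slow_but_safe"}
print(choose(actions))  # cruise
```

When goals conflict, the numeric trade-off is already encoded in the utility values, which is exactly the advantage the cards above describe.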
After an agent is programmed, can it work immediately?
No, it still needs teaching. In AI, once an agent is built, we teach it by giving it a set of examples and test it by using another set of examples.
We then say the agent ___.
learns (Learning Agent)
Learning Agents have four conceptual components:
- Learning element: making improvements
- Performance element: selecting external actions
- Critic: giving feedback on how well the agent is doing
- Problem generator: suggesting exploratory actions
Learning Agents
Tells the learning element how well the agent is doing with respect to a fixed performance standard.
Critic
Feedback from user or examples, good or not?
Learning Agents
Suggests actions that will lead to new and informative experiences.
Problem generator