Module 2: Intelligent Agents Flashcards

1
Q

An ___ is anything that perceives its environment through sensors and acts upon that environment through actuators.

A

agent

Example:
* A human is an agent
* A robot is also an agent with cameras and motors
* A thermostat detecting room temperature

2
Q

A ___ has eyes, ears, and other organs which work as sensors, and hands, legs, and a vocal tract which work as actuators.

A

Human agent

3
Q

A ___ can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.

A

Robotic agent

4
Q

A ___ can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

A

Software agent

5
Q

An agent is anything that perceives its environment through ___ and acts upon that environment through ___.

A

sensors, actuators

6
Q

A ___ is a device which detects changes in the environment and sends the information to other electronic devices.

A

Sensor

An agent observes its environment through sensors.

7
Q

___ are the components of machines that convert energy into motion.

A

Actuators

Actuators are responsible only for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

8
Q

___ are the devices which affect the environment. These can be legs, wheels, arms, fingers, wings, fins, and display screens.

A

Effectors

10
Q

An ___ is a program that can make decisions or perform a service based on its environment, user input and experiences.

A

intelligent agent

These programs can be used to autonomously gather information on a regular, programmed schedule or when prompted by the user in real time.

11
Q

An intelligent agent is a program that can make ___ or perform a service based on its environment, user input and experiences.

A

decisions

12
Q

Intelligent agents may also be referred to as a ___, which is short for robot.

A

bot

13
Q

An ___ is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals.

A

intelligent agent

14
Q

An intelligent agent may learn from the ___ to achieve its goals.

A

environment

15
Q

The main four rules for an AI agent

A
  • Rule 1: An AI agent must have the ability to perceive the environment.
  • Rule 2: The observation must be used to make decisions.
  • Rule 3: Decision should result in an action.
  • Rule 4: The action taken by an AI agent must be a rational action.
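
The four rules above amount to a perceive-decide-act loop. Here is a minimal sketch in Python; the Environment and ThermostatAgent classes are illustrative assumptions, not part of the deck:

```python
# Illustrative sketch of the four rules as a perceive-decide-act loop.

class Environment:
    """A toy room whose temperature the agent can sense and change."""
    def __init__(self, temperature=18.0):
        self.temperature = temperature

    def percept(self):
        # Rule 1: the agent can perceive the environment
        return self.temperature

    def apply(self, action):
        # the agent's action changes the environment
        self.temperature += -1.0 if action == "cool" else 0.5

class ThermostatAgent:
    TARGET = 21.0

    def decide(self, percept):
        # Rules 2-4: use the observation to decide on a (rational) action
        return "cool" if percept > self.TARGET else "heat"

env = Environment()
agent = ThermostatAgent()
for _ in range(10):
    env.apply(agent.decide(env.percept()))
```

After the loop, the agent has driven the temperature close to its target.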
16
Q

Agent’s perceptual inputs at any given instant

A

Percept

17
Q

Complete history of everything that the agent has ever perceived.

A

Percept sequence

18
Q

Agent’s behavior is mathematically described by ___.

A

Agent function

19
Q

Agent’s behavior is ___ described by agent function.

A

mathematically

20
Q

A function mapping any given percept sequence to an action

A

Agent function

21
Q

Agent’s behavior is ___ described by agent program.

A

practically

22
Q

Agent’s behavior is practically described by ___.

A

Agent program

The real implementation

23
Q

A ___ is an agent which has clear preference, models uncertainty, and acts in a way to maximize its performance measure with all possible actions.

A

rational agent

24
Q

A ___ is said to perform the right things.

A

rational agent

AI is about creating rational agents that use game theory and decision theory in various real-world scenarios.

25
Q

For an AI agent, rational action is most important because in the AI ___ algorithm, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.

A

reinforcement learning

26
Q

Rational agents in AI are very similar to ___.

A

intelligent agents

27
Q

One that does the right thing

A

Rational agent

28
Q

Every entry in the table for the agent function is correct

A

Rationality

Rational agent

29
Q

What is correct?

The actions that cause the agent to be most successful, so we need ways to measure ___.

A

success

30
Q

An objective function that determines how successfully the agent performs

A

Performance measure

31
Q

The rationality of an agent is measured by its ___.

A

performance measure

32
Q

Rationality can be judged on the basis of these points

A
  • Performance measure which defines the success criterion.
  • Agent prior knowledge of its environment.
  • Best possible actions that an agent can perform.
  • The sequence of percepts.
33
Q

Rationality differs from ___ because an omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.

A

Omniscience

34
Q

A rational agent should select an action expected to maximize its ___, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

A

performance measure

35
Q

An ___ knows the actual outcome of its actions in advance, with no other possible outcomes. However, this is impossible in the real world.

A

omniscient agent

36
Q

The task of AI is to design an agent program which implements the ___.

A

agent function

37
Q

The structure of an intelligent agent is a combination of ___ and ___.

A

architecture, agent program

38
Q

___ is the machinery that an AI agent executes on.

A

Architecture

39
Q

___ is used to map a percept to an action.

A

Agent function

40
Q

___ is an implementation of the agent function.

A

Agent program

41
Q

An ___ executes on the physical architecture to produce function f.

f:P* → A

A

agent program

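
The distinction between the agent function f: P* → A and the agent program (see also cards 93 and 94) can be sketched in Python. The names below are illustrative assumptions, not from the deck:

```python
# Illustrative sketch: the agent function maps a percept *sequence* to an
# action, while the agent program is the concrete implementation that
# receives one percept at a time and remembers the history itself.

def agent_function(percept_sequence):
    """Mathematical description: entire percept history -> action."""
    return "cool" if percept_sequence[-1] > 21.0 else "heat"

class AgentProgram:
    """Practical implementation: stores the history and is fed only the
    current percept on each call."""
    def __init__(self):
        self.history = []

    def __call__(self, percept):
        self.history.append(percept)
        return agent_function(self.history)

program = AgentProgram()
first = program(19.0)   # history is now [19.0]
second = program(23.0)  # history is now [19.0, 23.0]
```
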
42
Q

___ is when, after experiencing an episode, the agent adjusts its behavior to perform better at the same job next time.

A

Learning

43
Q

Does a rational agent depend on only the current percept?

No, the past ___ should also be used.

A

percept sequence

This is called learning.

44
Q

If an agent just relies on the prior knowledge of its designer rather than its own percepts, then the agent lacks ___.

A

autonomy

45
Q

A rational agent should be ___ - it should learn what it can to compensate for partial or incorrect prior knowledge.

A

autonomous

E.g., a clock
* No input (percepts)
* Runs only on its own algorithm (prior knowledge)
* No learning, no experience, etc.

46
Q

Sometimes the environment is not the real world but an artificial yet very complex environment. Agents working in such environments are called ___.

A

Software agent (softbots)

Because all parts of the agent are software.

47
Q

Task environments are the ___ while the rational agents are the ___.

A

problems, solutions

48
Q

___ environments are the problems while the ___ agents are the solutions.

A

Task [environments], rational [agents]

49
Q

In designing an agent, the first step must always be to specify the ___ as fully as possible.

A

task environment

50
Q

___ is a type of model on which an AI agent works.

A

PEAS

51
Q

When we define an AI agent or rational agent, then we can group its properties under PEAS representation model. It is made up of four words:

P: ___
E: ___
A: ___
S: ___

A

Performance measure
Environment
Actuators
Sensors

Let’s suppose a self-driving car; then the PEAS representation will be:

  • Performance: Safety, time, legal drive, comfort
  • Environment: Roads, other vehicles, road signs, pedestrian
  • Actuators: Steering, accelerator, brake, signal, horn
  • Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
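
The PEAS description of the self-driving car above can be captured as a plain data structure. This is only an illustrative sketch; the class and field names are my own, not part of the model:

```python
from dataclasses import dataclass

# Illustrative sketch: PEAS as a simple record type.

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
```
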
52
Q

An ___ is everything in the world which surrounds the agent, but it is not a part of the agent itself.

A

environment

53
Q

An environment can be described as a situation in which an agent is ___.

A

present

54
Q

The ___ is where the agent lives and operates, and it provides the agent with something to sense and act upon.

A

environment

55
Q

An environment is mostly said to be ___.

A

non-deterministic

56
Q

As per ___ and ___, an environment can have various features from the point of view of an agent:

  • Fully observable vs Partially Observable
  • Static vs Dynamic
  • Discrete vs Continuous
  • Deterministic vs Stochastic
  • Single-agent vs Multi-agent
  • Episodic vs sequential
  • Known vs Unknown
  • Accessible vs Inaccessible
A

Russell, Norvig

57
Q

If an agent sensor can sense or access the complete state of an environment at each point of time then it is a ___ environment, else it is ___.

A

fully observable, partially observable

58
Q

A ___ environment is easy as there is no need to maintain the internal state to keep track history of the world.

A

fully observable

59
Q

If an agent has no sensors in an environment, then such an environment is called ___.

A

unobservable

60
Q

If an agent’s current state and selected action can completely determine the next state of the environment, then such an environment is called a ___ environment.

A

deterministic

61
Q

A ___ environment is random in nature and cannot be determined completely by an agent.

A

stochastic

62
Q

In a ___, ___ environment, the agent does not need to worry about uncertainty.

A

deterministic, fully observable

63
Q

If next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is ___, otherwise, it is ___.

A

deterministic, stochastic

64
Q

An environment that is deterministic except for actions of other agents

A

Strategic environment

65
Q

In an ___ environment, there is a series of one-shot actions, and only the current percept is required for the action.

A

episodic

66
Q

In a ___ environment, an agent requires memory of past actions to determine the next best actions.

A

Sequential

67
Q

Agent’s single pair of perception and action

A

Episode

68
Q

The quality of the agent’s action does not depend on other episodes, making every episode independent of each other.

A

Episodic

An episodic environment is simpler, as the agent does not need to think ahead.

69
Q

An environment where the current action may affect all future decisions

A

Sequential

70
Q

A ___ environment is always changing over time.

A

dynamic

71
Q

An environment that does not change over time, but in which the agent’s performance score does

A

Semidynamic

72
Q

If the environment can change itself while an agent is deliberating, then such an environment is called a ___ environment; else it is called a ___ environment.

A

dynamic, static

73
Q

___ environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.

A

Static

74
Q

In a ___ environment, agents need to keep looking at the world at each action.

A

dynamic

Taxi driving is an example of a dynamic environment whereas crossword puzzles are an example of a static environment.

75
Q

If in an environment there are a finite number of percepts and actions that can be performed within it, then such an environment is called a ___ environment; else it is called a ___ environment.

A

discrete, continuous

76
Q

If there are a limited number of distinct states and clearly defined percepts and actions, the environment is ___.

A

discrete

77
Q

A chess game comes under a ___ environment, as there is a finite number of moves that can be performed.

A

discrete

78
Q

If only one agent is involved in an environment and operates by itself, then such an environment is called a ___ environment.

A

single agent

79
Q

If multiple agents are operating in an environment, then such an environment is called a ___ environment.

A

multi-agent

80
Q

The agent ___ problems in the multi-agent environment are different from those in a single-agent environment.

A

design

81
Q

___ and ___ are not actually features of an environment but the agent’s state of knowledge for performing an action.

A

Known, unknown

82
Q

Known and unknown are not actually features of an environment but the agent’s or designer’s ___ for performing an action.

A

state of knowledge

83
Q

In a ___ environment, the outcomes for all actions are given.

A

known

84
Q

If the environment is ___, the agent will have to learn how it works in order to perform an action and make good decisions.

A

unknown

85
Q

It is quite possible for a known environment to be ___ and an unknown environment to be ___.

A

partially observable, fully observable

86
Q

Some sort of computing device (sensors + actuators)

A

Architecture

87
Q

Agent = ___ + ___

A

architecture, program

88
Q

Some function that implements the agent mapping = “?”

A

(Agent) Program

89
Q

Job of AI

A

Agent Program

90
Q

If an agent can obtain complete and accurate information about the environment’s state, then such an environment is called an ___ environment; else it is called ___.

A

Accessible, inaccessible

91
Q

An empty room whose state can be defined by its temperature is an example of an ___ environment.

A

accessible

92
Q

Information about an event on Earth is an example of an ___ environment.

A

Inaccessible

93
Q

Input for Agent Program

A

Only the current percept

94
Q

Input for Agent Function

A

The entire percept sequence

The agent must remember all of them.

95
Q

Implement the agent program as a ___

A

look up table (agent function)
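
A lookup-table agent program can be sketched in a few lines; because the table is indexed by the entire percept sequence, it grows impractically large. The vacuum-world-style entries below are made up for the example:

```python
# Illustrative sketch of a table-driven agent program.

table = {
    ("dirty",): "suck",
    ("clean",): "move-right",
    ("clean", "dirty"): "suck",
}

percepts = []  # the agent must remember every percept it has seen

def table_driven_agent(percept):
    percepts.append(percept)
    # look up the action for the whole percept sequence so far
    return table.get(tuple(percepts), "no-op")
```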

96
Q

Types of agent programs

A
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • Learning agents
97
Q

It uses just condition-action rules

A

Simple reflex agents

98
Q

Simple reflex agents use ___ rules

A

condition-action
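
Condition-action rules can be sketched as an ordered list of (condition, action) pairs checked against the current percept only, which is why simple reflex agents need full observability. The rules below are made up for the example:

```python
# Illustrative sketch of a simple reflex agent.

rules = [
    (lambda p: p["status"] == "dirty", "suck"),
    (lambda p: p["location"] == "A", "move-right"),
    (lambda p: p["location"] == "B", "move-left"),
]

def simple_reflex_agent(percept):
    # fire the first rule whose condition matches the current percept
    for condition, action in rules:
        if condition(percept):
            return action
    return "no-op"
```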

99
Q

Simple reflex agents work only if the environment is ___.

A

fully observable

100
Q

Simple reflex agents are efficient but have a narrow range of ___ because knowledge sometimes cannot be stated explicitly.

A

applicability

101
Q

Model-based reflex agents are for a world that is ___.

A

partially observable

102
Q

Agent that has to keep track of an internal state that depends on the percept history, reflecting some of the unobserved aspects.

A

Model-based Reflex Agents

103
Q

Model-based Reflex Agents require two types of knowledge

A
  • How the world evolves independently of the agent
  • How the agent’s actions affect the world
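
A model-based reflex agent maintaining an internal state with these two kinds of knowledge can be sketched as follows; all names and rules here are illustrative assumptions:

```python
# Illustrative sketch of a model-based reflex agent: the internal state is
# updated from a model of how actions affect the world, then from the
# (possibly partial) current percept.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}          # best guess about unobserved aspects
        self.last_action = None

    def update_state(self, percept):
        # model: our own "move-right" action puts us at location B
        if self.last_action == "move-right":
            self.state["location"] = "B"
        # fold in whatever the partial percept does tell us
        self.state.update(percept)

    def __call__(self, percept):
        self.update_state(percept)
        action = "suck" if self.state.get("status") == "dirty" else "move-right"
        self.last_action = action
        return action
```
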
104
Q

An agent for which the current state of the environment alone is not always enough; it also has a goal to achieve.

A

Goal-based agents

Judgment of rationality / correctness

105
Q

Goal-based agents choose actions based on the ___ and ___.

A

current state, current percept

106
Q

Goal-based agents are less ___ but more ___.

A

efficient, flexible

Agent <— Different goals <— different tasks

107
Q

Two other sub-fields in AI

A

Search and planning

108
Q

[Goal-based agents] To find out the action sequences to achieve its goal

A

Search and planning

109
Q

An agent for which goals alone are not enough to generate high-quality behavior.

A

Utility-based agents

110
Q

If goal means success, then ___ means the degree of success.

A

utility

111
Q

[Utility-based agents] It is said that state A has higher ___ if state A is preferred over other states.

A

utility

112
Q

Utility is therefore a function that maps a state onto a real number; the degree of ___.

A

happiness

113
Q

Utility has several advantages:

When there are conflicting goals, only some of the goals but not all can be achieved; utility describes the appropriate ___.

A

trade-off

114
Q

When there are several goals, none of which can be achieved with certainty, utility provides a way for the ___.

A

decision-making
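
A utility function that resolves conflicting goals by mapping states to real numbers can be sketched as below; the weights and outcome values are made up for the example:

```python
# Illustrative sketch: pick the action whose predicted resulting state has
# the highest utility.

def utility(state):
    # trade off safety against speed (these weights are assumptions)
    return 2.0 * state["safety"] + 1.0 * state["speed"]

# predicted resulting states for two candidate actions
outcomes = {
    "drive_fast": {"safety": 0.4, "speed": 0.9},
    "drive_safe": {"safety": 0.9, "speed": 0.5},
}

best_action = max(outcomes, key=lambda a: utility(outcomes[a]))
```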

115
Q

After an agent is programmed, can it work immediately?

No, it still needs teaching. In AI, once an agent is built, we teach it by giving it a set of examples and test it by using another set of examples.

We then say the agent ___.

A

learns (Learning Agent)

116
Q

A learning agent has four conceptual components:

A
  • Learning element (responsible for making improvements)
  • Performance element (responsible for selecting external actions)
  • Critic
  • Problem generator
117
Q

Learning Agents

Tells the learning element how well the agent is doing with respect to a fixed performance standard.

A

Critic

Feedback from user or examples, good or not?

118
Q

Learning Agents

Suggests actions that will lead to new and informative experiences.

A

Problem generator